Processor Core Dependent
In the realm of semiconductor IP, the Processor Core Dependent category encompasses intellectual properties specifically designed to enhance and support processor cores. These IPs work in concert with processor cores to optimize performance, adding value by reducing time-to-market and improving efficiency in modern integrated circuits. The category is crucial for customizing and adapting processors to specific application needs, addressing both performance optimization and system-complexity management.
Processor Core Dependent IPs are integral components, typically found in applications that require robust data processing capabilities such as smartphones, tablets, and high-performance computing systems. They can also be implemented in embedded systems for automotive, industrial, and IoT applications, where precision and reliability are paramount. By providing foundational building blocks that are pre-verified and configurable, these semiconductor IPs significantly simplify the integration process within larger digital systems, enabling a seamless enhancement of processor capabilities.
Products in this category may include cache controllers, memory management units, security hardware, and specialized processing units, all designed to complement and extend the functionality of processor cores. These solutions enable system architects to leverage existing processor designs while incorporating cutting-edge features and optimizations tailored to specific application demands. Such customizations can significantly boost the performance, energy efficiency, and functionality of end-user devices, translating into better user experiences and competitive advantages.
In essence, Processor Core Dependent semiconductor IPs represent a strategic approach to processor design, providing a toolkit for customization and optimization. By focusing on the interdependencies within processing units, these IPs allow for specialized solutions that cater to the needs of various industries, ensuring the delivery of high-performance, reliable, and efficient computing. As demand for sophisticated digital systems continues to grow, these IPs remain central to maintaining a competitive edge.
The NMP-750 is a high-end performance accelerator for edge computing, tailored to challenges in sectors like automotive, telecommunications, and smart factories. It supports mobility, autonomous control, and process automation workloads, setting a benchmark in high-performance edge computing. Delivering up to 16 TOPS with 16 MB of local memory, it pairs with RISC-V or Arm Cortex-R/A 32-bit CPUs for substantial computational tasks. Its architecture supports a rich set of applications, including multi-camera stream processing and energy management, through 128-bit AXI4 interfaces that handle extensive data traffic efficiently. The accelerator is particularly suited to complex scenarios such as spectral-efficiency optimization and smart building management. Designed for scalability and reliability, the NMP-750 delivers consistent real-time performance for next-generation deployments.
The NMP-350 is designed as a cost-effective endpoint accelerator with a strong emphasis on low power consumption, making it ideal for AIoT, automotive, and smart-appliance applications. Its architecture supports applications such as driver authentication, digital mirrors, and predictive maintenance while ensuring efficient resource management. Capable of delivering up to 1 TOPS, the NMP-350 integrates up to 1 MB of local memory and supports RISC-V or Arm Cortex-M 32-bit CPU cores. Three 128-bit AXI4 interfaces manage host, CPU, and data traffic separately. This architecture serves a host of applications in wearables, Industry 4.0, and health monitoring. Targeting markets like AIoT/sensors and smart appliances, the NMP-350 positions itself as a favored choice for low-cost, power-sensitive device designs. As industries gravitate toward energy-efficient technologies, products like the NMP-350 offer a competitive edge in smart, green development.
Designed for high-performance applications, the Metis AIPU PCIe AI Accelerator Card employs four Metis AI Processing Units to deliver exceptional computational power. With its ability to reach up to 856 TOPS, this card is tailored for demanding vision applications, making it suitable for real-time processing of multi-channel video data. The PCIe form factor ensures easy integration into existing systems, while the customized software platform simplifies the deployment of neural networks for tasks like YOLO object detection. This accelerator card ensures scalability and efficiency, allowing developers to implement AI applications that are both powerful and cost-effective. The card’s architecture also takes advantage of RISC-V and Digital-In-Memory Computing technologies, bringing substantial improvements in speed and power efficiency.
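As a rough illustration of the card's headline figure, the per-unit throughput can be estimated by dividing the total across the four Metis AIPUs. This sketch assumes the quoted 856 TOPS is an aggregate figure split evenly across units, which is an assumption for illustration rather than a vendor-confirmed breakdown.

```python
# Back-of-envelope sketch: per-unit throughput of the Metis PCIe card,
# assuming (hypothetically) that the 856 TOPS figure is the aggregate
# across all four AIPUs and divides evenly.

TOTAL_TOPS = 856
NUM_AIPUS = 4

tops_per_aipu = TOTAL_TOPS / NUM_AIPUS
print(f"Per-AIPU throughput: {tops_per_aipu} TOPS")  # → 214.0 TOPS
```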
The Origin E1 neural engines by Expedera redefine efficiency and customization for low-power AI solutions. Crafted for edge devices like home appliances and security cameras, these engines serve ultra-low-power applications that demand continuous sensing. They draw as little as 10-20 mW, keep data on-chip for security, and eliminate the need for external memory access. The packet-based architecture enhances performance by enabling parallel layer execution, optimizing resource utilization. Designed for dedicated AI functions, Origin E1 can be tailored to support specific neural networks efficiently while reducing silicon area and system cost. It supports a range of networks, from CNNs to RNNs, making it versatile for numerous applications. The engine is among the most power-efficient in the industry at 18 TOPS per watt. Origin E1 also offers a full TVM-based software stack for easy integration and performance optimization across customer platforms. It supports a wide array of data types and networks, with sustained average utilization around 80%. This makes it a reliable choice for OEMs seeking high performance in always-sensing applications, with a competitive edge in both power efficiency and security.
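To see what the quoted 18 TOPS/W efficiency implies at the 10-20 mW always-sensing power budget mentioned above, a simple linear estimate suffices. Real efficiency varies with workload and utilization, so this is arithmetic on the quoted figures only, not a measured benchmark.

```python
# Hedged sketch: sustained throughput implied by a power budget at the
# quoted 18 TOPS/W efficiency. Linear extrapolation only; actual
# throughput depends on workload, data type, and utilization.

TOPS_PER_WATT = 18

def sustained_tops(power_mw: float) -> float:
    """Throughput implied by a power budget at the quoted efficiency."""
    return TOPS_PER_WATT * (power_mw / 1000.0)

for mw in (10, 20):
    print(f"{mw} mW -> {sustained_tops(mw):.2f} TOPS")
```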
The Veyron V2 CPU takes the innovation witnessed in its predecessor and propels it further, offering unparalleled performance for AI and data center-class applications. This successor to the V1 CPU integrates seamlessly into environments requiring high computational power and efficiency, making it perfect for modern data challenges. Built upon RISC-V's architecture, it provides an open-standard alternative to traditional closed processor models. With a heavy emphasis on AI and machine learning workloads, Veyron V2 is designed to excel in handling complex data-centric tasks. This CPU can quickly adapt to multifaceted requirements, proving indispensable from enterprise servers to hyperscale data centers. Its superior design enables it to outperform many contemporary alternatives, positioning it as a lead component for next-generation computing solutions. The processor's adaptability allows for rapid and smooth integration into existing systems, facilitating quick upgrades and enhancements tailored to specific operational needs. As the Veyron V2 CPU is highly energy-efficient, it empowers data centers to achieve greater sustainability benchmarks without sacrificing performance.
The Metis AIPU M.2 Accelerator Module is a cutting-edge AI processing unit designed to boost the performance of edge computing tasks. This module integrates seamlessly with innovative applications, offering a robust solution for inference at the edge. It excels in vision AI tasks with its dedicated 512MB LPDDR4x memory, providing the necessary storage for complex tasks. Offering unmatched energy efficiency, the Metis AIPU M.2 module is capable of delivering significant performance gains while maintaining minimal power consumption. At an accessible price point, this module opens up AI processing capabilities for a variety of applications. As an essential component of next-generation vision processing systems, it is ideal for industries seeking to implement AI technologies swiftly and effectively.
The Origin E8 NPUs represent Expedera's cutting-edge solution for environments demanding the utmost in processing power and efficiency. This high-performance core scales from 32 to 128 TOPS in a single-core configuration, addressing complex AI tasks in automotive and data-centric operational settings. The E8's architecture stands apart in its ability to handle multiple concurrent tasks without compromising performance. It adopts Expedera's signature packet-based architecture for optimized parallel execution and resource management, removing the need for hardware-specific tweaks. The Origin E8 also supports input resolutions up to 8K and integrates well with both standard and custom neural networks, enhancing its utility in future-forward AI applications. Leveraging a flexible, scalable design, the E8 IP cores come with a comprehensive software suite to streamline AI deployment. Field-proven and already deployed in numerous consumer vehicles, Expedera's Origin E8 is a robust, reliable choice for developers needing optimized AI inference, ideally suited for data centers and high-performance automotive systems.
The SCR9 Processor Core is a cutting-edge processor designed for entry-level server-class and personal computing applications. Featuring a 12-stage dual-issue out-of-order pipeline, it supports robust RISC-V extensions including vector operations and a high-complexity memory system. This core is well-suited for high-performance computing, offering exceptional power efficiency with multicore coherence and the ability to integrate accelerators, making it suitable for areas like AI, ML, and enterprise computing.
The High Performance RISC-V Processor from Cortus represents the forefront of high-end computing, designed for applications demanding exceptional processing speeds and throughput. It features an out-of-order execution core that supports both single-core and multi-core configurations for diverse computing environments. This processor specializes in handling complex tasks requiring multi-threading and cache coherency, making it suitable for applications ranging from desktops and laptops to high-end servers and supercomputers. It includes integrated vector and AI accelerators, enhancing its capability to manage intensive data-processing workloads efficiently. Furthermore, this RISC-V processor is adaptable for advanced embedded systems, including automotive central units and AI applications in ADAS, providing enormous potential for innovation and performance across various markets.
Origin E2 NPUs focus on delivering power and area efficiency, making them ideal for on-device AI applications in smartphones and edge nodes. These processing units support a wide range of neural networks, including video, audio, and text-based applications, all while maintaining impressive performance metrics. The unique packet-based architecture ensures effective performance with minimal latency and eliminates the need for hardware-specific optimizations. The E2 series offers customization options allowing it to fit specific application needs perfectly, with configurations supporting up to 20 TOPS. This flexibility represents significant design advancements that help increase processing efficiency without introducing latency penalties. Expedera's power-efficient design results in NPUs with industry-leading performance at 18 TOPS per Watt. Further augmenting the value of E2 NPUs is their ability to run multiple neural network types efficiently, including LLMs, CNNs, RNNs, and others. The IP is field-proven, deployed in over 10 million consumer devices, reinforcing its reliability and effectiveness in real-world applications. This makes the Origin E2 an excellent choice for companies aiming to enhance AI capabilities while managing power and area constraints effectively.
The AndeShape Platforms are designed to streamline system development by providing a diverse suite of IP solutions for SoC architecture. These platforms encompass a variety of product categories, including the AE210P for microcontroller applications, AE300 and AE350 AXI fabric packages for scalable SoCs, and AE250 AHB platform IP. These solutions facilitate efficient system integration with Andes processors. Furthermore, AndeShape offers a sophisticated range of development platforms and debugging tools, such as ADP-XC7K160/410, which reinforce the system design and verification processes, providing a comprehensive environment for the innovative realization of IoT and other embedded applications.
Cortus's Automotive AI Inference SoC is a breakthrough solution tailored for autonomous driving and advanced driver assistance systems. The SoC combines efficient image processing with AI inference, optimized for city infrastructure and mid-range vehicle markets. Built on a RISC-V architecture, it runs specialized algorithms, such as those in the YOLO series, for fast and accurate image recognition. Its low power consumption suits embedded automotive applications that require enhanced processing without compromising energy efficiency. The chip targets Level 2 and Level 4 autonomous driving systems, providing a comprehensive AI-driven platform that enhances safety and operational capability in urban settings.
The Chimera GPNPU by Quadric is a versatile processor specifically designed to enhance machine learning inference tasks on a broad range of devices. It provides a seamless blend of traditional digital signal processing (DSP) and neural processing unit (NPU) capabilities, allowing it to handle complex ML networks alongside conventional C++ code. Designed with a focus on adaptability, the Chimera GPNPU architecture enables easy porting of various models and application software, making it a robust solution for rapidly evolving AI technologies. A key feature of the Chimera GPNPU is its scalable design, which extends from 1 to a remarkable 864 TOPS, catering to applications from standard to advanced high-performance requirements. This scalability is coupled with support for a broad range of ML networks, such as classic backbones, vision transformers, and large language models, fulfilling various computational needs across industries. The Chimera GPNPU also excels in automotive applications, including ADAS and ECU systems, due to its ASIL-ready design. The processor's hybrid architecture merges Von Neumann and 2D SIMD matrix capabilities, promoting efficient execution of scalar, vector, and matrix operations. It boasts a deterministic execution pipeline and extensive customization options, including configurable instruction caches and local register memories that optimize memory usage and power efficiency. This design effectively reduces off-chip memory accesses, ensuring high performance while minimizing power consumption.
The NMP-550 is a performance-focused accelerator geared toward applications that demand high efficiency, especially in fields such as automotive, drones, and AR/VR. It addresses needs including driver monitoring, image/video analytics, and heightened security through its powerful architecture and processing capability. Delivering up to 6 TOPS, the NMP-550 includes up to 6 MB of local memory. Supporting RISC-V or Arm Cortex-M/A 32-bit CPUs, the product ensures robust processing for advanced applications. Three 128-bit AXI4 interfaces provide seamless data exchange across host, CPU, and data channels, giving integrators greater flexibility. Suited to medical devices as well as security and surveillance, it supports processes such as super-resolution and fleet management. Its comprehensive design and efficiency make it an optimal choice for applications demanding elevated performance within constrained resources.
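The three NMP accelerators described above form a tiered family; their stated figures can be collected into a small lookup table for side-by-side comparison. All figures below are taken directly from the product descriptions (all three parts also share triple 128-bit AXI4 interfaces and 32-bit RISC-V/Arm host CPU support).

```python
# Stated specs of the NMP accelerator family, as a simple lookup table.
# Figures come from the product descriptions; "target" is a shorthand
# label for each part's positioning.

NMP_FAMILY = {
    "NMP-350": {"peak_tops": 1,  "local_mem_mb": 1,  "target": "endpoint"},
    "NMP-550": {"peak_tops": 6,  "local_mem_mb": 6,  "target": "performance"},
    "NMP-750": {"peak_tops": 16, "local_mem_mb": 16, "target": "edge"},
}

for name, spec in NMP_FAMILY.items():
    print(f"{name}: up to {spec['peak_tops']} TOPS, "
          f"{spec['local_mem_mb']} MB local memory ({spec['target']})")
```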
The xcore.ai platform by XMOS Semiconductor is a sophisticated and cost-effective solution aimed specifically at intelligent IoT applications. Harnessing a unique multi-threaded micro-architecture, xcore.ai provides superior low latency and highly predictable performance, tailored for diverse industrial needs. It is equipped with 16 logical cores divided across two multi-threaded processing tiles. These tiles come enhanced with 512 kB of SRAM and a vector unit supporting both integer and floating-point operations, allowing the device to handle both simple and complex computational demands efficiently. A key feature of the xcore.ai platform is its powerful interprocessor communication infrastructure, which enables seamless high-speed communication between processors and scalability across multiple systems on a chip. Within this homogeneous environment, developers can integrate DSP, AI/ML, control, and I/O functionality, allowing the device to adapt to specific application requirements. Moreover, the software-defined architecture allows optimal configuration, reducing power consumption and enabling cost-effective intelligent solutions. The xcore.ai platform shows impressive DSP capability, thanks to a scalar pipeline that supports 32-bit floating-point operations at peak rates of up to 1600 MFLOPS. AI/ML capabilities are also robust, with support for various bit-width vector operations, making the platform a strong contender for AI applications requiring homogeneous computing environments and tight operator integration.
The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.
AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores uniformly support the RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.
Inicore’s iniDSP is a dynamic 16-bit digital signal processor core engineered for high-performance signal processing applications across a spectrum of fields including audio, telecommunications, and industrial automation. It leverages efficient computation capabilities to manage complex algorithms and data-intensive tasks, making it an ideal choice for all DSP needs. The iniDSP is designed around a scalable architecture, permitting customization to fit specific processing requirements. It ensures optimal performance whether interpreting audio signals, processing image data, or implementing control algorithms. The flexibility of this DSP core is evident in its seamless transition from simulation environments to real-world applications, supporting rapid prototyping and effective deployment. Inicore’s dedication to delivering robust processing solutions is epitomized in the iniDSP's ability to manage extensive DSP tasks with precision and speed. This makes it a valuable component for developers looking to amplify signal processing capabilities and achieve higher efficiency in their system designs.
Ventana's System IP product suite is crucial for integrating the Veyron CPUs into a cohesive RISC-V based high-performance system. This integration set ensures the smooth operation and optimization of Ventana's processors, enhancing their applicability across various computational tasks. Particularly relevant for data centers and enterprise settings, the suite includes essential components such as IOMMU and CPX interfaces to streamline the management of multiple workloads. These system IP products are built with a focus on optimized communication and processing efficiency, making them integral to achieving superior data throughput and system reliability. The design encompasses the necessities for robust virtualization and resource allocation, making it well suited to high-demand data environments requiring careful coordination between system components. By leveraging Ventana's System IP, users can ensure that their processors meet and exceed the performance needs typical of today's cloud-intensive and server-heavy operations. This capability makes the System IP a foundational element in creating a performance-optimized technology stack capable of sustaining diverse, modern technological demands.
The Veyron V1 CPU represents an efficient, high-performance processor tailored to address a myriad of data center demands. As an advanced RISC-V architecture processor, it stands out by offering competitive performance compatible with the most current data center workloads. Designed to excel in efficiency, it marries performance with a sustainable energy profile, allowing for optimal deployment in various demanding environments. This processor brings flexibility to developers and data center operators by providing extensive customization options. Veyron V1's robust architecture is meant to enhance throughput and streamline operations, facilitating superior service provision across cloud infrastructures. Its compatibility with diverse integration requirements makes it ideal for a broad swath of industrial uses, encouraging scalability and robust data throughput. Adaptability is a key feature of Veyron V1 CPU, making it a preferred choice for enterprises looking to leverage RISC-V's open standards and extend the performance of their platforms. It aligns seamlessly with Ventana's broader ecosystem of products, creating excellence in workload delivery and resource management within hyperscale and enterprise environments.
Dream Chip Technologies' Arria 10 System on Module (SoM) emphasizes embedded and automotive vision applications. Utilizing Altera's Arria 10 SoC Devices, the SoM is compact yet packed with powerful capabilities. It features a dual-core Cortex A9 CPU and supports up to 480 KLEs of FPGA logic elements, providing ample space for customization and processing tasks. The module integrates robust power management features to ensure efficient energy usage, with interfaces for DDR4 memory, PCIe Gen3, Ethernet, and 12G SDI among others, housed in a form factor measuring just 8 cm by 6.5 cm. Engineered to support high-speed data processing, the Arria 10 SoM includes dual DDR4 memory interfaces and 12 transceivers at 12 Gbit/s and above. It provides comprehensive connectivity options, including two USB ports, Gigabit Ethernet, and multiple GPIOs with level-shifting capabilities. This level of integration makes it optimal for developing solutions for automotive systems, particularly in scenarios requiring high-speed data and image processing. Additionally, the SoM comes with a suite of reference designs, such as the Intel Arria 10 Golden System Reference Design, to expedite development cycles. This includes pre-configured HPS and memory controller IP, as well as customized U-Boot and Angström Linux distributions, further enriching its utility in automotive and embedded domains.
Certus Semiconductor specializes in advanced RF/Analog IP solutions, tackling the intricate needs of high-performance wireless communication systems. Their cutting-edge technology provides ultra-low power wireless front-end integration, verified across a range of silicon contexts to ensure reliability and excellence. These solutions cover a comprehensive spectrum of RF configurations from silicon-proven RF IPs to fully integrated RF transceivers used in state-of-the-art wireless devices. Features of Certus's RF/Analog solutions include finely tuned custom PLLs and LNAs with frequencies reaching up to 6GHz, tailored for superior phase noise performance and minimal jitter. This level of precision ensures optimized signal integrity and power efficiency, crucial for maintaining peak operations in wireless systems like LTE, WiFi, and GNSS. Furthermore, the innovative next-generation wireless IPs cater to ultra-low latency operations necessary for modern communication protocols, demonstrating Certus Semiconductor's commitment to driving forward-thinking technology in RF design. With an inclusive approach covering custom designs and off-the-shelf IP offerings, Certus ensures that each product meets specific project demands with exceptional precision and efficiency.
The AON1020 expands AI processing capabilities to encompass not only voice and audio recognition but also a variety of sensor applications. It leverages the power of the AONSens Neural Network cores, offering a comprehensive solution that integrates Verilog RTL technology to support both ASIC and FPGA products. Key to the AON1020's appeal is its versatility in addressing various sensor data, such as human activity detection. This makes it indispensable in applications requiring nuanced responses to environmental inputs, from motion to gesture awareness. It deploys these capabilities while minimizing energy demands, aligning perfectly with the needs of battery-operated and wearable devices. By executing real-time analytics on device-stored data, the AON1020 ensures high accuracy in environments fraught with noise and user variability. Its architecture allows it to detect multiple commands simultaneously, enhancing device interaction while maintaining low power consumption. Thus, the AON1020 is not only an innovator in sensor data interaction but also a leader in ensuring extended device functionality without compromising energy efficiency or processing accuracy.
Eliyan's NuLink Die-to-Die PHY for standard packaging is a technological innovation designed to enhance chiplet interconnectivity within conventional package forms. Tailored to seamlessly integrate with both silicon bridges and organic package substrates, this product eliminates the need for advanced packaging solutions while matching their performance characteristics. By achieving the same remarkable levels of data transfer efficiency and power optimization typically associated with advanced methods, NuLink technology stands out as a cost-effective solution for multi-die integration. Targeted for ASIC designs, the NuLink Die-to-Die PHY is capable of supporting a wide array of industry standards including UCIe and BoW. Its design enables the connection of chiplets in standard packaging without requiring large silicon interposers, ensuring both significant performance gains and cost savings. This flexibility makes it particularly appealing for systems that require mixing and matching of chiplets of varying dimensions. In practical applications, Eliyan’s solution facilitates increased placement flexibility and supports configurations that demand physical separation of components, such as those between hot ASICs and heat-sensitive dies. By leveraging a standard packaging approach, this PHY product provides substantial improvements in thermal management, cost efficiency, and production timelines compared to traditional methods.
VSORA's Tyr Superchip epitomizes high-performance capabilities tailored for the demanding worlds of autonomous driving and generative AI. With its advanced multi-core architecture, this superchip can execute any algorithm efficiently without relying on CUDA, which promotes versatility in AI deployment. Built to deliver a seamless combination of AI and general-purpose processing, the Tyr Superchip utilizes sparsity techniques, supporting quantization on-the-fly, which optimizes its performance for a wide array of computational tasks. The Tyr Superchip is distinctive for its ability to support the simultaneous execution of AI and DSP tasks, selectable on a layer-by-layer basis, which provides unparalleled flexibility in workload management. This flexibility is further complemented by its low latency and power-efficient design, boasting performance near theoretical maximums, with support for next-generation algorithms and software-defined vehicles (SDVs). Safety is prioritized with the implementation of ISO26262/ASIL-D features, making the Tyr Superchip an ideal solution for the automotive industry. Its hardware is designed to handle the computational load required for safe and efficient autonomous driving, and its programmability allows for ongoing adaptations to new automotive standards and innovations.
The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites. The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities. Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile-time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.
Avispado is a sophisticated 64-bit RISC-V core that emphasizes efficiency and adaptability within in-order execution frameworks. It's engineered to cater to energy-efficient SoC designs, making it an excellent choice for machine learning applications with its compact design and ability to seamlessly communicate with RISC-V Vector Units. By utilizing the Gazzillion Misses™ technology, the Avispado core effectively handles high sparsity in tensor weights, resulting in superior energy efficiency per operation. This core features a 2-wide in-order configuration and supports the RISC-V Vector Specification 1.0 as well as Semidynamics' Open Vector Interface. With support for large memory capacities, it includes complete MMU features and is Linux-ready, ensuring it's prepared for demanding computational tasks. The core's native CHI interface can be fine-tuned to AXI, promoting cache-coherent multiprocessing capabilities. Avispado is optimized for various demanding workloads, with optional extensions for specific needs such as bit manipulation and cryptography. The core's customizable configuration allows changes to its instruction and data cache sizes (I$ and D$ from 8KB to 32KB), ensuring it meets specific application demands while retaining operational efficiency.
The Codasip RISC-V BK Core Series offers a versatile solution for a broad range of computing needs, from low-power embedded devices to high-performance applications. These cores leverage the open RISC-V instruction set architecture (ISA), enabling designers to take advantage of expansive customization options that optimize performance and efficiency. The BK Core Series supports high configurability, allowing users to adapt the microarchitecture and extend the instruction set based on specific application demands. Incorporating advanced features like zero-overhead loops and SIMD instructions, the BK Core Series is designed to handle computationally intensive tasks efficiently. This makes them ideal for applications in audio processing, AI, and other scenarios requiring high-speed data processing and computation. Additionally, these cores are rigorously verified to meet industry standards, ensuring robustness and reliability even in the most demanding environments. The BK Core Series also aligns with Codasip's focus on functional safety and security. These processors come equipped with features to bolster system reliability, helping prevent cyber threats and errors that could lead to system malfunctions. This makes the BK Core Series an excellent choice for industries that prioritize safety, such as automotive and industrial automation.
RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the process of executing large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as the Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflows. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformer-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring the high computation speeds necessary for real-time applications without compromising accuracy. This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
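Since LLM token generation is typically memory-bandwidth bound, the "tokens per unit of memory bandwidth" framing above can be made concrete with a back-of-the-envelope ceiling. The numbers below (usable bandwidth, model size, bits per weight) are illustrative assumptions, not RaiderChip specifications:

```python
def max_tokens_per_second(mem_bandwidth_gbs: float,
                          params_billion: float,
                          bytes_per_weight: float) -> float:
    """Bandwidth ceiling on LLM decode throughput: each generated token
    streams every weight once, so tokens/s <= bandwidth / model size.
    An upper bound only -- it ignores KV-cache traffic and compute."""
    model_bytes = params_billion * 1e9 * bytes_per_weight
    return mem_bandwidth_gbs * 1e9 / model_bytes

# Hypothetical: a 3B-parameter model at 4 bits/weight (0.5 bytes) on
# ~25 GB/s of usable LPDDR4 bandwidth.
print(round(max_tokens_per_second(25, 3, 0.5), 1))  # -> 16.7
```

Halving the bytes per weight doubles the ceiling, which is why aggressive quantization pays off so directly in bandwidth-bound inference.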
RegSpec is a comprehensive register specification tool that excels in generating Control Configuration and Status Register (CCSR) code. The tool is versatile, supporting various input formats like SystemRDL, IP-XACT, and custom formats via CSV, Excel, XML, or JSON. Its ability to output in formats such as Verilog RTL, System Verilog UVM code, and SystemC header files makes it indispensable for IP designers, offering extensive features for synchronization across multiple clock domains and interrupt handling. Additionally, RegSpec automates verification processes by generating UVM code and RALF files useful in firmware development and system modeling.
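The spec-to-code flow that RegSpec automates can be illustrated with a toy generator. The register names, layout, and output below are invented for illustration only; RegSpec's real inputs (SystemRDL, IP-XACT, CSV/Excel/XML/JSON) and outputs (Verilog RTL, SystemVerilog UVM, SystemC headers, RALF) are far richer than this sketch:

```python
# Toy illustration of spec-driven register code generation. The register
# map and the C-header output format are hypothetical, not RegSpec's.
registers = [
    {"name": "CTRL",   "offset": 0x00, "reset": 0x00000001},
    {"name": "STATUS", "offset": 0x04, "reset": 0x00000000},
    {"name": "IRQ_EN", "offset": 0x08, "reset": 0x00000000},
]

def emit_c_header(regs, base_name="MYIP"):
    """Render one #define pair (offset, reset value) per register."""
    lines = [f"/* auto-generated register map for {base_name} */"]
    for r in regs:
        lines.append(f"#define {base_name}_{r['name']}_OFFSET 0x{r['offset']:02X}u")
        lines.append(f"#define {base_name}_{r['name']}_RESET  0x{r['reset']:08X}u")
    return "\n".join(lines)

print(emit_c_header(registers))
```

A production tool generates many such views (RTL, UVM model, firmware header, documentation) from the single specification, which is precisely what keeps them consistent.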
Bluespec's Portable RISC-V Cores offer a versatile and adaptable solution for developers seeking cross-platform compatibility with support for FPGAs from Achronix, Xilinx, Lattice, and Microsemi. These cores come with support for operating systems like Linux and FreeRTOS, providing developers with a seamless and open-source toolset for application development. By leveraging Bluespec’s extensive compatibility and open-source frameworks, developers can benefit from efficient, versatile RISC-V application deployment.
aiWare represents a specialized hardware IP core designed for optimizing neural network performance in automotive AI applications. This neural processing unit (NPU) delivers exceptional efficiency for a spectrum of AI workloads, crucial for powering automated driving systems. Its design is focused on scalability and versatility, supporting applications ranging from L2 regulatory tasks to complex multi-sensor L3+ systems, ensuring flexibility to accommodate evolving technological needs. The aiWare hardware is integrated with advanced features like industry-leading data bandwidth management and deterministic processing, ensuring high efficiency across diverse workloads. This makes it a reliable choice for automotive sectors striving for ASIL-B certification in safety-critical environments. aiWare's architecture utilizes patented dataflows to maximize performance while minimizing power consumption, critical in automotive scenarios where resource efficiency is paramount. Additionally, aiWare is supported by an innovative SDK that simplifies the development process through offline performance estimation and extensive integration tools. These capabilities reduce the dependency on low-level programming for neural network execution, streamlining development cycles and enhancing the adaptability of AI applications in automotive domains.
The GenAI v1-Q from RaiderChip brings a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This accelerator is engineered to execute large language models in real time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency, especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, with no dependence on external networks or cloud services. Its design combines strong computational performance with scalability, allowing seamless adaptation across varied hardware platforms including FPGA and ASIC implementations. This flexibility is crucial for tailoring performance parameters like model scale, inference speed, and power consumption to meet exacting user specifications. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
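The memory savings quoted above come from weight quantization. A minimal sketch of symmetric 4-bit quantization conveys the trade-off; note that this toy per-tensor scheme is not the actual Q4_K/Q5_K block formats, which pack per-block scales and minimums into a dedicated layout:

```python
import numpy as np

def quantize_4bit(w: np.ndarray):
    """Symmetric per-tensor 4-bit quantization -- a toy stand-in for
    block-wise formats like Q4_K. Codes land in the int range [-7, 7]."""
    scale = max(float(np.abs(w).max()), 1e-12) / 7.0
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from 4-bit codes."""
    return q.astype(np.float32) * scale

w = np.array([0.8, -0.35, 0.02, -0.71], dtype=np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize(q, s)
# 4-bit codes need 1/8 the storage of float32 (ignoring the scale),
# and reconstruction error stays within half a quantization step:
assert float(np.max(np.abs(w - w_hat))) <= s / 2 + 1e-6
```

Block-wise variants like Q4_K keep one scale per small group of weights, which tightens the error bound at a small storage cost; the memory-footprint arithmetic is the same.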
The nxFeed Market Data System is a high-performance FPGA-enabled feed handler that significantly improves the development and deployment of market data applications. By processing data feeds directly on FPGA hardware, nxFeed reduces latency and server load, providing reliable and swift access to critical market data. Built to complement trading applications or support in-house ticker plant development, nxFeed efficiently handles data arbitration, decoding, normalization, and order book creation with ease. This streamlined FPGA-based solution ensures minimal latency and enables developers to focus on core business logic rather than data processing bottlenecks. The system offers a versatile deployment model, adaptable via PCIe integration or through Ethernet distribution, catering to various infrastructural needs. With an easy API and swift integration capabilities, nxFeed offers a highly agile solution for firms focusing on high-frequency trading and other latency-sensitive applications.
Time-Triggered Protocol is an advanced synchronization mechanism designed for precise communication in networked systems. It establishes a highly predictable framework where data exchange is timed to occur at regular intervals, ensuring timely communication regardless of network load. This deterministic approach guarantees that information is consistently delivered at the right time, which is particularly crucial for applications that demand heightened reliability and precision. This protocol is widely adopted in applications where timing and coordination are critical. By employing a globally synchronized time base across network nodes, the Time-Triggered Protocol minimizes delays and jitter, fostering an environment of high reliability. Its design inherently supports fault-tolerant systems, increasing the dependability of networks deployed in domains such as aerospace, automotive, and industrial automation. One of the key highlights of the Time-Triggered Protocol is its ability to integrate seamlessly with various systems, maintaining synchronization and order even in complex setups. This integration capability supports scalability, allowing systems to expand without compromising the timing accuracy and integrity of communications.
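The core scheduling idea — each node transmits only in its statically assigned slot of a globally synchronized round — can be sketched in a few lines. The slot length and node names below are hypothetical, and real TTP adds clock synchronization, membership, and mode changes on top of this:

```python
def owner_of_slot(t_us: int, slot_us: int, schedule: list[str]) -> str:
    """Which node may transmit at global time t_us under a static TDMA
    round. Simplified: assumes all nodes already share a fault-tolerant
    global time base, which the protocol itself must establish."""
    slot_index = (t_us // slot_us) % len(schedule)
    return schedule[slot_index]

# Hypothetical 4-slot round of 250 us slots (a node may own several slots):
round_schedule = ["nodeA", "nodeB", "nodeC", "nodeA"]
print(owner_of_slot(0, 250, round_schedule))    # -> nodeA
print(owner_of_slot(600, 250, round_schedule))  # -> nodeC
```

Because the schedule is fixed at design time, worst-case latency and jitter are known in advance — the property that makes time-triggered buses attractive for certifiable aerospace and automotive systems.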
The RV32EC_P2 Processor Core by IQonIC Works is a streamlined 2-stage pipeline RISC-V processor designed for low-power embedded applications running trusted firmware. This processor core supports ASIC and FPGA design flows and implements the RISC-V RV32E instruction set, complemented by optional standard extensions like integer multiplication and division. The core allows for specific instruction set extensions, such as for Digital Signal Processing (DSP) operations, enhancing its adaptability to various applications. Key features include a simple machine-mode architecture with direct physical memory access and support for up to 20 extended interrupts. Its clock-efficient 2-stage pipeline is optimized for speed, with most instructions completing in a single cycle. For low-power operation, the design includes provisions for clock gating during idle states, making it ideal for energy-sensitive use cases. The processor interfaces with AHB-Lite or APB for extended memory and memory-mapped I/O. This core is highly configurable, adaptable for various embedded scenarios with support for the RISC-V Privileged Architecture, and offers comprehensive memory interfaces to fit different design needs. IQonIC Works provides extensive tooling and support, including a compatible GNU tool chain and Eclipse development environment, ensuring developers can leverage this processor's full potential.
The SiFive Performance family of RISC-V processors targets maximum throughput and efficiency for environments including web servers and multimedia processing. These processors come in configurations ranging from three-wide to six-wide out-of-order (OoO) cores, with dedicated vector engines designed for AI workloads. By providing energy efficiency without compromising on performance, the SiFive Performance cores are tailored to meet the needs of diverse high-performance applications. These cores are scalable, offering configurations that can extend up to 256 cores. This scalability is essential for data centers and mobile infrastructure alike, where performance and efficiency are paramount. Key technical features include a six-wide out-of-order core architecture, RAS (reliability, availability, serviceability) functionality, and a scalable core cluster. In data centers and beyond, they facilitate the development of a diverse range of applications, including big data analytics and enterprise infrastructure solutions. SiFive's commitment to high-performance RISC-V processors caters to growing demand for performant, area-efficient application processors.
VisualSim Architect serves as a comprehensive platform for modeling and simulating electronic systems. It is designed to bridge the gap between system specification and implementation, enhancing the model-based systems engineering process. It provides crucial timing and power simulations that help identify potential bottlenecks early in development, thus optimizing system performance and functionality. Besides streamlining the specification process, it supports extensive use-case exploration and architecture optimization on various platforms. With its capacity to generate detailed reports on performance and power, engineers can gauge system efficiency accurately. VisualSim Architect's powerful capabilities significantly reduce the risk of design errors and expedite the development cycle. This tool is especially beneficial for semiconductor design, where precise performance and power predictions are crucial.
The AON1100 represents AONDevices' flagship in edge AI solutions aimed at voice and sensor applications. Its design philosophy centers on providing high accuracy combined with super low-power consumption. This chip shines when processing tasks such as voice commands, speaker identification, and sensor data integration. With a power rating of less than 260μW, the AON1100 maintains operational excellence even in environments with sub-0dB Signal-to-Noise Ratios. Its performance is highly appreciated in always-on devices, making it suitable for smart home applications, wearables, and automotive systems that demand real-time responsiveness and minimal energy draw. The AON1100 incorporates streamlined algorithms that enhance its sensor fusion capabilities, paving the way for smarter device contexts beyond traditional interactions. Its RISC-V support adds an additional layer of flexibility and compatibility with a wide range of applications, contributing significantly to the chip's adaptability and scalability across various domains.
The RV32IC_P5 is a 5-stage pipeline RISC-V processor core suited for medium-scale embedded applications requiring higher performance and cache capabilities. This core enhances system capability with features including 16-bit compressed instructions and options for standard extensions, such as support for machine-mode protected application execution. It comprises a robust architecture with machine- and user-mode functionality, supporting up to 20 extended interrupts and optional user-mode extensions for exception handling. This design is aimed at enhancing multitasking and system performance, with an optional branch prediction scheme to reduce latency. With tight integration capabilities, it connects to AHB-Lite interfaces and advanced memory-mapped systems, and offers support for virtual prototyping and extensive toolchain development environments, making it suitable for embedded applications demanding rigorous performance.
The SAKURA-II AI Accelerator is a state-of-the-art energy-efficient AI processing unit designed to meet the demanding needs of generative AI applications. Boasting up to 60 TOPS performance, this accelerator is built on EdgeCortix's patented Dynamic Neural Accelerator architecture, which allows high efficiency and low power consumption in edge AI tasks. It supports a wide array of models including Llama 2, Stable Diffusion, ViT, and large transformer and convolutional networks, typically operating within an 8W power envelope. Offering enhanced DRAM bandwidth and a maximum memory capacity of 32GB, SAKURA-II is optimized for real-time data streaming and efficient handling of large language models (LLMs) and vision-based AI workloads. This accelerator achieves exceptionally high AI compute utilization and provides robust energy-efficient performance, making it ideal for advanced applications in vision, language, audio, and more. The accelerator is available in modular M.2 and PCIe card formats, facilitating seamless integration into existing systems. These form factors ensure that the SAKURA-II can be effortlessly integrated into space-constrained or power-limited environments, making it a strong choice for both the development and deployment phases of edge AI initiatives.
The Tachyum Prodigy Universal Processor is a pioneering technology, serving as the world's first universal processor. It seamlessly integrates the capabilities of CPUs, GPGPUs, and TPUs into a single architectural framework. This innovation drastically enhances performance, energy efficiency, and space utilization, making it a favored choice for AI, high-performance computing, and hyperscale data centers. Designed to operate at lower costs than traditional processors, Prodigy provides a notable reduction in the total cost of ownership for data centers, while simultaneously delivering superior execution speed, surpassing conventional Xeon processors. The processor enables the transformation of data centers into integrated computational environments, supporting an array of applications from basic processing tasks to advanced AI training and inference. The Prodigy processor also supports a wide range of applications, allowing developers to run existing software without modifications. This flexibility is complemented by the ability to lower energy consumption significantly, making it an environmental ally by decreasing carbon emissions. The processor's design strategy sidesteps silicon underutilization, fostering an ecosystem for energy-efficient simulations and data analysis. This sustainability focus is vital as high-performance computing demands continue to escalate, making Prodigy a forward-thinking choice for enterprises focused on green computing solutions.
ASIC North offers a comprehensive suite of ARM Cortex-M® microcontrollers and processors, tailored for seamless integration into a variety of industry-specific systems. These ASICs are designed to deliver top-notch performance and efficiency, suitable for applications ranging from IoT to complex industrial systems. With a focus on providing reliable and scalable solutions, these ASICs cater to evolving technological needs while maintaining a high standard of quality and technical precision.

The ARM M-Class ASICs provide clients with the flexibility to adapt quickly to changing market demands. Leveraging years of expertise in semiconductor design, ASIC North ensures that these processors are optimized for a wide range of tasks, from enhancing communication protocols to facilitating advanced data processing functions. Continuous development and testing further ensure that these ASICs fulfill rigorous industry standards and client specifications.

By integrating these ARM M-Class ASICs, businesses gain access to next-generation capabilities that enhance overall operational efficiency. Whether deployed in robust industrial applications or cutting-edge consumer electronics, these ASICs are built to support dynamic performance needs, offering both reliability and adaptability across various sectors.
Dyumnin's RISCV SoC is built around a robust 64-bit quad-core server-class RISC-V CPU, offering various subsystems that cater to AI/ML, automotive, multimedia, memory, and cryptographic needs. This SoC is notable for its AI accelerator, including a custom CPU and tensor flow unit designed to expedite AI tasks. Furthermore, the communication subsystem supports a wide array of protocols like PCIe, Ethernet, and USB, ensuring versatile connectivity. For automotive applications, it includes CAN and SafeSPI IPs.
IMEC's Monolithic Microsystems offer a leap forward in integrating complex electronic systems onto a single chip. These systems are developed to meet the growing demand for miniaturization, performance, and multifunctionality in electronics, particularly significant for industries looking to enhance device sophistication without expanding physical footprint. Monolithic Microsystems are pivotal in applications ranging from consumer electronics to industrial automation, where space efficiency and performance are key. The innovation lies in the seamless integration of multiple components into a single monolith, significantly reducing interconnections and enhancing signal integrity. This approach not only streamlines the manufacturing process but also boosts reliability and enhances the system’s overall performance. The small form factor and high functionality make it a preferred choice for developing smarter, interconnected devices across various high-tech sectors. These microsystems are specifically engineered to cater to advanced applications such as wearable technology, smart medical devices, and sophisticated sensors. By marrying advanced materials with innovative design paradigms, IMEC ensures that these microsystems can withstand challenging operating conditions, offering robustness and longevity. Further, the monolithic integration allows for new levels of device intelligence and integration, facilitating the growth of next-generation electronics tailored to specific industry needs.
The SFA 100 Edge IoT Data Processing solution offers streamlined data handling for IoT applications at the edge. It is optimized to efficiently process data collected from various IoT devices, ensuring quick and effective data transfer and analysis locally. This solution supports intelligent data processing near the source, which is critical for reducing latency and bandwidth usage in IoT networks.
Designed specifically for single channel advanced driver-assistance systems (ADAS), the SFA 250A focuses on providing reliable data processing for automotive applications. It processes a range of inputs from vehicle sensors, allowing precise and timely responses critical for safe driving. The solution integrates seamlessly with existing vehicular systems to enhance safety through intelligent data analysis and robust real-time processing capabilities.
The TSP1 Neural Network Accelerator from Applied Brain Research is a cutting-edge AI solution built to handle complex AI workloads with remarkable efficiency. Designed specifically for battery-powered devices, this chip excels in low-power operations while processing extensive neural network tasks. At its core, the TSP1 integrates state-of-the-art processing capabilities tailored for time series data, essential for applications like natural voice interfaces and bio-signal classification. This innovative chip is notable for its energy efficiency, consuming less than 10mW for complex AI tasks, making it an ideal solution for energy-conscious applications. Furthermore, it supports an array of sensor signal applications, ensuring versatile functionality across different domains including AR/VR, smart home automation, and medical wearables. By incorporating the Legendre Memory Unit, a proprietary advanced state-space neural network model, the TSP1 achieves superior data efficiency compared to traditional network architectures. ABR’s TSP1 stands out for its ability to perform powerful AI inferences with low latency, essential for real-time applications. It supports a wide range of interfaces, making it suitable for diverse integration scenarios, from voice recognition to industrial automation. Additionally, the chip's optimized hardware is key for algorithm scaling, facilitating smooth processing of larger neural models without compromising speed or performance.
TUNGA emerges as a revolutionary multi-core SoC integrating RISC-V cores with posit arithmetic capabilities. This solution is specifically architected for enhancing high-performance computing (HPC) and artificial intelligence workloads by leveraging the advantages of posit data types. As data centers struggle with the limitations of traditional number formats, TUNGA offers improved accuracy and efficiency, transforming real-number calculations with its innovative RISC-V foundation. This cutting-edge SoC includes the QUIRE accumulator, adept at executing precise dot products, crucial for delivering high-accuracy computations across extensive datasets. TUNGA's design incorporates reconfigurable FPGA gates, offering adaptability in critical function accelerations tailored for datacenter tasks. This adaptability extends to managing unique data types, thereby expediting AI training and inference. TUNGA stands out for its capability to streamline applications such as cryptography and AI support functions, making it a vital tool in pushing data center technologies to new horizons.
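The QUIRE accumulator's benefit — accumulating a dot product exactly and rounding only once at the end — can be emulated in software with exact rationals (real hardware uses a wide fixed-point quire register rather than rationals, but the numerical effect is the same):

```python
from fractions import Fraction

def quire_style_dot(xs, ys):
    """Dot product accumulated exactly, rounded once at the end -- the
    idea behind the posit QUIRE accumulator, emulated here with exact
    rationals instead of a wide fixed-point register."""
    acc = Fraction(0)
    for x, y in zip(xs, ys):
        acc += Fraction(x) * Fraction(y)
    return float(acc)  # single rounding step

# Naive float accumulation absorbs the small term; exact accumulation keeps it.
xs, ys = [1e16, 1.0, -1e16], [1.0, 1.0, 1.0]
print(sum(x * y for x, y in zip(xs, ys)))  # -> 0.0
print(quire_style_dot(xs, ys))             # -> 1.0
```

Rounding once instead of after every addition is what lets quire-equipped hardware deliver high-accuracy reductions over large datasets without resorting to wider floating-point formats.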
iCEVision is a powerful evaluation platform for the iCE40 UltraPlus FPGA, providing designers with the tools needed to rapidly prototype and confirm connectivity features. The platform is compatible with common camera interfaces like ArduCam CSI and PMOD, allowing seamless integration into various imaging solutions. The board's exposed I/Os make it easy to implement and test user-defined functions, facilitating swift development from concept to production. It is equipped with a programmable SPI Flash and SRAM, supporting flexible programming using Lattice's Diamond Programmer and iCEcube2 software. With its small footprint and robust capabilities, iCEVision is ideal for developers looking to design customized projects. It's a go-to solution for rapid prototyping, ensuring efficient and effective deployment of visual processing applications in diverse arenas.