The CPU, or Central Processing Unit, is the central component of computer systems, acting as the brain that executes instructions and processes data. Our category of CPU semiconductor IPs offers a diverse selection of intellectual properties that enable the development of highly efficient and powerful processors for a wide array of applications, from consumer electronics to industrial systems. Semiconductor IPs in this category are designed to meet the needs of modern computing, offering adaptable and scalable solutions for different technology nodes and design requirements.
These CPU semiconductor IPs provide the core functionalities required for the development of processors capable of handling complex computations and multitasking operations. Whether you're developing systems for mobile devices, personal computers, or embedded systems, our IPs offer optimized solutions that cater to the varying demands of power consumption, processing speed, and operational efficiency. This ensures that you can deliver cutting-edge products that meet the market's evolving demands.
Within the CPU semiconductor IP category, you'll find a range of products, including RISC (Reduced Instruction Set Computer) processors, multi-core processors, and customizable processor cores, among others. Each product is designed to integrate seamlessly with other system components, offering enhanced compatibility and flexibility in system design. These IP solutions incorporate the latest architectural advancements and technological improvements to support next-generation computing needs.
Selecting the right CPU semiconductor IP is crucial for achieving target performance and efficiency in your applications. Our offerings are meticulously curated to provide comprehensive solutions that are robust, reliable, and capable of supporting diverse computing applications. Explore our CPU semiconductor IP portfolio to find the perfect components that will empower your innovative designs and propel your products into the forefront of technology.
Akida's Neural Processor IP represents a leap in AI architecture design, tailored to provide exceptional energy efficiency and processing speed for an array of edge computing tasks. At its core, the processor mimics the synaptic activity of the human brain, efficiently executing tasks that demand high-speed computation and minimal power usage. This processor is equipped with configurable neural nodes capable of supporting innovative AI frameworks such as convolutional and fully-connected neural network processes. Each node accommodates a range of MAC operations, enhancing scalability from basic to complex deployment requirements. This scalability enables the development of lightweight AI solutions suited for consumer electronics as well as robust systems for industrial use. Onboard features like event-based processing and low-latency data communication significantly decrease the strain on host processors, enabling faster and more autonomous system responses. Akida's versatile functionality and ability to learn on the fly make it a cornerstone for next-generation technology solutions that aim to blend cognitive computing with practical, real-world applications.
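The payoff of event-based processing can be illustrated with a toy example: if multiply-accumulate (MAC) work is triggered only by nonzero activations, compute scales with the number of events rather than with the layer size. A minimal NumPy sketch of that principle (illustrative only, not BrainChip's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse activations typical of event-based sensing: most entries are zero.
activations = rng.random(1024) * (rng.random(1024) < 0.1)  # ~10% nonzero
weights = rng.standard_normal((1024, 256))

# Dense approach: every input contributes one MAC per output neuron.
dense_macs = activations.size * weights.shape[1]

# Event-driven approach: only nonzero activations trigger MACs.
events = np.flatnonzero(activations)
event_macs = events.size * weights.shape[1]

# Both paths compute the same layer output.
dense_out = activations @ weights
event_out = activations[events] @ weights[events, :]
assert np.allclose(dense_out, event_out)

print(f"dense MACs: {dense_macs}, event MACs: {event_macs}, "
      f"saving: {1 - event_macs / dense_macs:.0%}")
```

With roughly 10% activation sparsity the event-driven path performs about a tenth of the MACs while producing the same result, which is the mechanism behind the reduced load on host processors described above.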
The KL730 is a third-generation AI chip that integrates an advanced reconfigurable NPU architecture, delivering up to 8 TOPS of computing power. This cutting-edge technology enhances computational efficiency across a range of applications, including CNN and transformer networks, while minimizing DDR bandwidth requirements. The KL730 also boasts enhanced video processing capabilities, supporting 4K 60 FPS output. Built on more than a decade of ISP expertise, the KL730 stands out with its noise reduction, wide dynamic range, fisheye correction, and low-light imaging performance. It caters to markets like intelligent security, autonomous vehicles, video conferencing, and industrial camera systems, among others.
The second-generation Akida platform builds upon the foundation of its predecessor with enhanced computational capabilities and increased flexibility for a broader range of AI and machine learning applications. This version supports 8-bit weights and activations in addition to the flexible 4- and 1-bit operations, making it a versatile solution for high-performance AI tasks. Akida 2 introduces support for programmable activation functions and skip connections, further enhancing the efficiency of neural network operations. These capabilities are particularly advantageous for implementing sophisticated machine learning models that require complex, interconnected processing layers. The platform also features support for Spatio-Temporal and Temporal Event-Based Neural Networks, advancing its application in real-time, on-device AI scenarios. Built as a silicon-proven, fully digital neuromorphic solution, Akida 2 is designed to integrate seamlessly with various microcontrollers and application processors. Its highly configurable architecture offers post-silicon flexibility, making it an ideal choice for developers looking to tailor AI processing to specific application needs. Whether for low-latency video processing, real-time sensor data analysis, or interactive voice recognition, Akida 2 provides a robust platform for next-generation AI developments.
The Metis AIPU PCIe AI Accelerator Card is engineered for developers demanding superior AI performance. With its quad-core Metis AIPU, this card delivers up to 214 TOPS, tackling challenging vision applications with unmatched efficiency. The PCIe card is designed with user-friendly integration in mind, featuring the Voyager SDK software stack that accelerates application deployment. Offering impressive processing speeds, the card supports up to 3,200 FPS for ResNet-50 models, providing a competitive edge for demanding AI tasks. Its design ensures it meets the needs of a wide array of AI applications, allowing for scalability and adaptability in various use cases.
The Akida IP is a groundbreaking neural processor designed to emulate the cognitive functions of the human brain within a compact and energy-efficient architecture. This processor is specifically built for edge computing applications, providing real-time AI processing for vision, audio, and sensor fusion tasks. The scalable neural fabric, ranging from 1 to 128 nodes, features on-chip learning capabilities, allowing devices to adapt and learn from new data with minimal external inputs, enhancing privacy and security by keeping data processing localized. Akida's unique design supports 4-, 2-, and 1-bit weight and activation operations, maximizing computational efficiency while minimizing power consumption. This flexibility in configuration, combined with a fully digital neuromorphic implementation, ensures a cost-effective and predictable design process. Akida is also equipped with event-based acceleration, drastically reducing the demands on the host CPU by facilitating efficient data handling and processing directly within the sensor network. Additionally, Akida's on-chip learning supports incremental learning techniques like one-shot and few-shot learning, making it ideal for applications that require quick adaptation to new data. These features collectively support a broad spectrum of intelligent computing tasks, including object detection and signal processing, all performed at the edge, thus eliminating the need for constant cloud connectivity.
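The appeal of 4-, 2-, and 1-bit operation is easy to quantify: weight storage shrinks linearly with bit width. A back-of-the-envelope sketch (generic arithmetic, not Akida's internal memory layout):

```python
def weight_memory_bytes(n_params: int, bits_per_weight: int) -> int:
    """Storage needed for n_params weights at a given bit width."""
    return (n_params * bits_per_weight + 7) // 8  # round up to whole bytes

n = 1_000_000  # a small edge-vision model, for illustration
for bits in (32, 8, 4, 2, 1):
    kib = weight_memory_bytes(n, bits) / 1024
    print(f"{bits:>2}-bit weights: {kib:,.0f} KiB")

# 4-bit weights use 1/8 the memory of FP32; 1-bit weights use 1/32.
assert weight_memory_bytes(n, 4) * 8 == weight_memory_bytes(n, 32)
```

The same ratio applies to the memory traffic per inference, which is why low-bit operation translates directly into lower power at the edge.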
The Yitian 710 Processor is a groundbreaking component in processor technology, designed with cutting-edge architecture to enhance computational efficiency. This processor is tailored for cloud-native environments, offering robust support for high-demand computing tasks. It is engineered to deliver significant improvements in performance, making it an ideal choice for data centers aiming to optimize their processing power and energy efficiency. With its advanced features, the Yitian 710 stands at the forefront of processor innovation, ensuring seamless integration with diverse technology platforms and enhancing the overall computing experience across industries.
The Veyron V2 CPU represents Ventana's second-generation RISC-V high-performance processor, designed for cloud, data center, edge, and automotive applications. This processor offers outstanding compute capabilities with its server-class architecture, optimized for handling complex, virtualized, and cloud-native workloads efficiently. The Veyron V2 is available as both IP for custom SoCs and as a complete silicon platform, ensuring flexibility for integration into various technological infrastructures. Emphasizing a modern architectural design, it includes full compliance with the RISC-V RVA23 specification, showcasing features like high instructions per clock (IPC) and a power-efficient architecture. Comprising multiple core clusters, this CPU is capable of delivering superior AI and machine learning performance, significantly boosting throughput and energy efficiency. The Veyron V2's advanced fabric interconnects and extensive cache architecture provide the necessary infrastructure for high-performance applications, ensuring broad market adoption and versatile deployment options.
The Quadric Chimera General Purpose Neural Processing Unit (GPNPU) delivers unparalleled performance for AI workloads, characterized by its ability to handle diverse and complex tasks without requiring separate processors for different operations. Designed to unify AI inference and traditional computing processes, the GPNPU supports matrix, vector, and scalar tasks within a single, cohesive execution pipeline. This design not only simplifies the integration of AI capabilities into system-on-chip (SoC) architectures but also significantly boosts developer productivity by allowing them to focus on optimizing rather than partitioning code. The Chimera GPNPU is highly scalable, supporting a wide range of operations across all market segments, including automotive applications with its ASIL-ready versions. With a performance range from 1 to 864 TOPS, it excels in running the latest AI models, such as vision transformers and large language models, alongside classic network backbones. This flexibility ensures that devices powered by Chimera GPNPU can adapt to advancing AI trends, making them suitable for applications that require both immediate performance and long-term capability. A key feature of the Chimera GPNPU is its fully programmable nature, making it a future-proof solution for deploying cutting-edge AI models. Unlike traditional NPUs that rely on hardwired operations, the Chimera GPNPU uses a software-driven approach and is delivered in source RTL form, making it a versatile option for inference in mobile, automotive, and edge computing applications. This programmability allows for easy updating and adaptation to new AI model operators, maximizing the lifespan and relevance of chips that utilize this technology.
Leveraging a high-performance RISC architecture, the eSi-3250 32-bit core efficiently integrates instruction and data caches. This makes it compatible with designs utilizing slower on-chip memories such as eFlash. The core not only supports an MMU for address translation but also allows for user-defined custom instructions, greatly enhancing its flexibility for specialized and high-performance applications.
The xcore.ai platform by XMOS is a versatile, high-performance microcontroller designed for the integration of AI, DSP, and real-time I/O processing. Focusing on bringing intelligence to the edge, this platform facilitates the construction of entire DSP systems using software without the need for multiple discrete chips. Its architecture is optimized for low-latency operation, making it suitable for diverse applications from consumer electronics to industrial automation. This platform offers a robust set of features conducive to sophisticated computational tasks, including support for AI workloads and enhanced control logic. The xcore.ai platform streamlines development processes by providing a cohesive environment that blends DSP capabilities with AI processing, enabling developers to realize complex applications with greater efficiency. By doing so, it reduces the complexity typically associated with chip integration in advanced systems. Designed for flexibility, xcore.ai supports a wide array of applications across various markets. Its ability to handle audio, voice, and general-purpose processing makes it an essential building block for smart consumer devices, industrial control systems, and AI-powered solutions. Coupled with comprehensive software support and development tools, the xcore.ai ensures a seamless integration path for developers aiming to push the boundaries of AI-enabled technologies.
The Metis AIPU M.2 Accelerator Module is designed for devices that require high-performance AI inference in a compact form factor. Powered by a quad-core Metis AI Processing Unit (AIPU), this module optimizes power consumption and integration, making it ideal for AI-driven applications. With a dedicated memory of 1 GB DRAM, it enhances the capabilities of vision processing systems, providing significant boosts in performance for devices with Next Generation Form Factor (NGFF) M.2 sockets. Suited to computer vision systems and beyond, it offers hassle-free integration and evaluation with Axelera's Voyager SDK. This accelerator module is tailored for any application seeking to harness the power of AI processing efficiently, streamlining the deployment of AI applications while ensuring high performance with reduced power consumption.
The Talamo Software Development Kit (SDK) is an advanced solution from Innatera designed to expedite the development of neuromorphic AI applications. It integrates seamlessly with PyTorch, providing developers with a familiar environment to build and extend AI models specifically for spiking neural processors. By enhancing the standard PyTorch workflow, Talamo simplifies the complexity associated with constructing spiking neural networks, allowing a broader range of developers to create sophisticated AI solutions without requiring deep expertise in neuromorphic computing. Talamo's capabilities include automatic mapping of trained models onto Innatera's heterogeneous computing architecture, coupled with a robust architecture simulator for efficient validation and iteration. This means developers can iterate quickly and efficiently, optimizing their applications for performance and power without extensive upfront reconfiguration or capital layout. The SDK supports the creation of collaborative application pipelines that merge signal processing with AI, supporting custom functions and neural network implementation. This gives developers the flexibility to tailor solutions to specific needs, be it in audio processing, gesture recognition, or environmental sensing. Through its comprehensive toolkit, Talamo SDK empowers users to translate conceptual models into high-performing AI applications that leverage the unique processing strengths of spiking neural networks, ultimately lowering barriers to innovation in low-power, edge-based AI.
The aiWare NPU (Neural Processing Unit) by aiMotive is a high-performance hardware solution tailored specifically for automotive AI applications. It is engineered to accelerate inference tasks for autonomous driving systems, ensuring excellent performance across a variety of neural network workloads. aiWare delivers significant flexibility and efficiency, capable of scaling from basic Level 2 applications to complex multi-sensor Level 3+ systems. Achieving up to 98% efficiency, aiWare's design focuses on minimizing power utilization while maximizing core performance. It supports a broad spectrum of neural network architectures, including convolutional neural networks, transformers, and recurrent networks, making it suitable for diverse AI tasks in the automotive sphere. The NPU's architecture allows for minimal external memory access, thanks to its highly efficient dataflow design that capitalizes on on-chip memory caching. With a robust toolkit known as aiWare Studio, engineers can efficiently optimize neural networks without in-depth knowledge of low-level programming, streamlining development and integration efforts. The aiWare hardware is also compatible with V2X communication and advanced driver assistance systems, adapting to various operational needs with great dexterity. Its comprehensive support for automotive safety standards further cements its reputation as a reliable choice for integrating artificial intelligence into next-generation vehicles.
The SAKURA-II is a cutting-edge AI accelerator that combines high performance with low power consumption, designed to efficiently handle multi-billion parameter models for generative AI applications. It is particularly suited for tasks that demand real-time AI inferencing with minimal batch processing, making it ideal for applications deployed in edge environments. With a typical power usage of 8 watts and a compact footprint, the SAKURA-II achieves more than twice the AI compute efficiency of comparable solutions. This AI accelerator supports next-generation applications by providing up to 4x more DRAM bandwidth than alternatives, crucial for processing complex vision tasks and large language models (LLMs). The hardware offers advanced precision through software-enabled mixed-precision, which achieves near-FP32 accuracy, while a unique sparse computing feature optimizes memory usage. Its robust memory architecture supports up to 32 GB of DRAM, providing ample capacity for intensive AI workloads. The SAKURA-II's modular design allows it to be used in multiple form factors, addressing the diverse needs of modern computing tasks such as those found in smart cities, autonomous robotics, and smart manufacturing. Its adaptability is further enhanced by runtime-configurable data paths, allowing the device to optimize task scheduling and resource allocation dynamically. These features are powered by the Dynamic Neural Accelerator engine, ensuring efficient computation and energy management.
The RV12 RISC-V Processor is a highly configurable, single-core CPU that adheres to the RV32I and RV64I standards. It's engineered for the embedded market, offering a robust structure based on the RISC-V instruction set. The processor's architecture allows simultaneous instruction and data memory accesses, lending itself to a broad range of applications and maintaining high operational efficiency. This flexibility makes it an ideal choice for diverse execution requirements, supporting efficient data processing through an optimized CPU framework. Known for its adaptability, the RV12 processor can support multiple configurations to suit various application demands. It is capable of providing the necessary processing power for embedded systems, boasting a reputation for stability and reliability. The processor is integral to designs that must sustain performance without compromising configurability, meeting the rigorous needs of modern embedded computing. The processor's support of the open RISC-V architecture ensures its capability to integrate into existing systems seamlessly. It lends itself well to both industrial and academic applications, offering a resource-efficient platform that developers and researchers can easily access and utilize.
The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.
The KL630 is a pioneering AI chipset featuring Kneron's latest NPU architecture, which is the first to support Int4 precision and transformer networks. This cutting-edge design ensures exceptional compute efficiency with minimal energy consumption, making it ideal for a wide array of applications. With an ARM Cortex A5 CPU at its core, the KL630 excels in computation while maintaining low energy expenditure. This SoC is designed to handle both high and low light conditions optimally and is perfectly suited for use in diverse edge AI devices, from security systems to expansive city and automotive networks.
RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the process of executing large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness: it significantly outperforms competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflows. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformer-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring high computation speeds necessary for real-time applications without compromising accuracy.
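The "tokens per unit of memory bandwidth" framing reflects a general property of batch-1 LLM decoding: every generated token streams essentially all model weights from memory, so bandwidth divided by model size bounds the token rate. A rough model of that ceiling (the 3B-parameter and 17 GB/s figures below are assumptions for illustration, not RaiderChip specifications):

```python
def max_tokens_per_sec(n_params: float, bits_per_weight: int,
                       mem_bw_gbs: float) -> float:
    """Bandwidth-bound ceiling for batch-1 autoregressive decoding:
    each token reads all weights once from memory."""
    bytes_per_token = n_params * bits_per_weight / 8
    return mem_bw_gbs * 1e9 / bytes_per_token

# Illustrative: a 3B-parameter model on ~17 GB/s LPDDR4 (assumed figures).
for bits in (16, 4):
    rate = max_tokens_per_sec(3e9, bits, 17)
    print(f"{bits:>2}-bit weights: ~{rate:.1f} tokens/s ceiling")
```

Quartering weight precision raises the ceiling fourfold, which is why 4-bit quantization matters so much on LPDDR-class bandwidth and why memory efficiency, not raw compute, is often the binding constraint.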
This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
The AX45MP is engineered as a high-performance processor that supports multicore architecture and advanced data processing capabilities, particularly suitable for applications requiring extensive computational efficiency. Powered by the AndesCore processor line, it capitalizes on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.
The GenAI v1-Q from RaiderChip brings forth a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This innovative accelerator is engineered to execute large language models in real-time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, free from external network or cloud auxiliaries. Its design harmonizes superior computational performance with scalability, allowing seamless adaptation across variegated hardware platforms including FPGAs and ASIC implementations. This flexibility is crucial for tailoring performance parameters like model scale, inference velocity, and power consumption to meet exacting user specifications effectively. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
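Block quantization schemes in the Q4_K family store a small integer per weight plus a shared scale per block, which is where the roughly 75% memory reduction relative to 16-bit weights comes from. A simplified symmetric 4-bit sketch of the principle (the block size and layout here are illustrative, not the actual Q4_K format):

```python
import numpy as np

def quantize_block(w: np.ndarray):
    """Symmetric 4-bit block quantization: ints in [-7, 7] plus one FP scale."""
    scale = float(np.abs(w).max()) / 7.0
    if scale == 0.0:
        scale = 1.0  # all-zero block: any scale reconstructs exactly
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_block(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.standard_normal(32).astype(np.float32)  # one 32-weight block
q, s = quantize_block(w)
w_hat = dequantize_block(q, s)

# 32 weights: 128 bytes in FP32 vs 16 bytes of 4-bit codes + one FP32 scale.
fp32_bytes = w.nbytes
q4_bytes = len(q) // 2 + 4  # two 4-bit codes per byte, 4-byte scale
print(f"FP32: {fp32_bytes} B, Q4-style: {q4_bytes} B "
      f"({1 - q4_bytes / fp32_bytes:.0%} smaller)")
print(f"max reconstruction error: {np.abs(w - w_hat).max():.3f}")
```

The per-block scale keeps the reconstruction error bounded by half a quantization step, which is how such schemes shrink memory dramatically while retaining usable precision.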
Micro Magic's Ultra-Low-Power 64-Bit RISC-V Core is engineered for superior energy efficiency, consuming a mere 10 mW while operating at a clock speed of 1 GHz. This processor core is designed to excel under low voltage conditions, delivering high performance without compromising on power conservation. It is ideal for applications requiring prolonged battery life or those operating in energy-sensitive environments. This processor integrates Micro Magic's advanced design methodologies, allowing for operation at frequencies up to 5 GHz when necessary. The RISC-V Core's capabilities are enhanced by solid construction, ensuring reliability and robust performance across various use cases, making it an adaptable solution for modern electronic designs. With its cutting-edge design, this RISC-V core supports rapid deployment in numerous applications, especially in areas demanding high computational power alongside reduced energy usage. Micro Magic's advanced techniques ensure that this core is not only fast but also supports scalable integration into various systems with ease.
The KL530 represents a significant advancement in AI chip technology with a new NPU architecture optimized for both INT4 precision and transformer networks. This SoC is engineered to provide high processing efficiency and low power consumption, making it suitable for AIoT applications and other innovative scenarios. It features an ARM Cortex M4 CPU designed for low-power operation and offers a robust computational power of up to 1 TOPS. The chip's ISP enhances image quality, while its codec ensures efficient multimedia compression. Notably, the chip's cold start time is under 500 ms with an average power draw of less than 500 mW, establishing it as a leader in energy efficiency.
The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.
The KL520 marks Kneron's foray into the edge AI landscape, offering an impressive combination of size, power efficiency, and performance. Armed with dual ARM Cortex M4 processors, this chip can operate independently or as a co-processor to enable AI functionalities such as smart locks and security monitoring. The KL520 is adept at 3D sensor integration, making it an excellent choice for applications in smart home ecosystems. Its compact design allows devices powered by it to operate on minimal power, such as running on AA batteries for extended periods, showcasing its exceptional power management capabilities.
Crafted to deliver significant power savings, the Tianqiao-70 is a low-power RISC-V CPU that excels in commercial-grade scenarios. This 64-bit CPU core is primarily designed for applications where power efficiency is critical, such as mobile devices and computationally intensive IoT solutions. The core's architecture is specifically optimized to perform under stringent power budgets without compromising on the processing power needed for complex tasks. It provides an efficient solution for scenarios that demand reliable performance while maintaining a low energy footprint. Through its refined design, the Tianqiao-70 supports a broad spectrum of applications, including personal computing, machine learning, and mobile communications. Its versatility and power-awareness make it a preferred choice for developers focused on sustainable and scalable computing architectures.
The Hanguang 800 AI Accelerator is a high-performance AI processor developed to meet the complex demands of artificial intelligence workloads. This accelerator is engineered with cutting-edge AI processing capabilities, enabling rapid data analysis and machine learning model inference. Designed for flexibility, the Hanguang 800 delivers superior computation speed and energy efficiency, making it an optimal choice for AI applications in a variety of sectors, from data centers to edge computing. By supporting high-volume data throughput, it enables organizations to achieve significant advantages in speed and efficiency, facilitating the deployment of intelligent solutions.
AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores uniformly support the RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.
The NMP-750 is AiM Future's powerful edge computing accelerator designed specifically for high-performance tasks. With up to 16 TOPS of computational throughput, this accelerator is perfect for automotive, AMRs, UAVs, as well as AR/VR applications. Fitted with up to 16 MB of local memory and featuring RISC-V or Arm Cortex-R/A 32-bit CPUs, it supports diverse data processing requirements crucial for modern technological solutions. The versatility of the NMP-750 is displayed in its ability to manage complex processes such as multi-camera stream processing and spectral efficiency management. It is also an apt choice for applications that require energy management and building automation, demonstrating exceptional potential in smart city and industrial setups. With its robust architecture, the NMP-750 ensures seamless integration into systems that need to handle large data volumes and support high-speed data transmission. This makes it ideal for applications in telecommunications and security where infrastructure resilience is paramount.
The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.
CrossBar's ReRAM Memory brings a revolutionary shift to the non-volatile memory sector, built on a straightforward yet efficient three-layer structure. Comprising a top electrode, a switching medium, and a bottom electrode, ReRAM holds vast potential as a multiple-time programmable memory solution. Leveraging its resistive switching mechanism, the technology excels in demanding data storage applications, integrating seamlessly into AI-driven, IoT, and secure computing systems. The patented ReRAM technology is distinguished by notable read and write speeds, making it a suitable candidate for future-facing chip architectures that require swift, wide-ranging memory capabilities. CrossBar's ReRAM cuts energy consumption to as little as one-fifth that of eFlash and offers substantial improvements over NAND and SPI Flash memories. Coupled with read latencies of around 20 nanoseconds and write times of approximately 12 microseconds, the technology outperforms existing solutions, enhancing system responsiveness and user experience. Its high-density configurations provide terabyte-scale storage with minimal physical footprint, ensuring effective integration into cutting-edge devices and systems. Moreover, ReRAM's design permits fabrication within traditional CMOS manufacturing processes, enabling scalable, stackable arrays. This adaptability means the memory can be integrated at various stages of semiconductor production, from standalone memory chips to embedded roles within complex system-on-chip designs. The inherent simplicity, combined with remarkable performance characteristics, positions ReRAM Memory as a key player in the advancement of secure, high-density computing.
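A quick back-of-envelope calculation shows what the quoted latency figures imply for peak single-access throughput. This is a simplification that assumes serialized, unpipelined accesses at exactly the stated latencies; it is not a CrossBar datasheet number.

```c
#include <stdint.h>

/* Latency figures quoted above: ~20 ns per read, ~12 us per write.
 * Assumes one access at a time with no pipelining or banking. */
#define READ_LATENCY_NS   20ULL
#define WRITE_LATENCY_NS  12000ULL
#define NS_PER_SEC        1000000000ULL

uint64_t max_reads_per_sec(void)  { return NS_PER_SEC / READ_LATENCY_NS;  }
uint64_t max_writes_per_sec(void) { return NS_PER_SEC / WRITE_LATENCY_NS; }
```

Under these assumptions a single bank could service on the order of 50 million reads per second but only about 83 thousand writes per second, which is why read-heavy workloads such as inference model storage are a natural fit.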
The N Class RISC-V CPU IP from Nuclei is tailored for applications where space efficiency and power conservation are paramount. It features a 32-bit architecture and is highly suited for microcontroller applications within the AIoT realm. The N Class processors are crafted to provide robust processing capabilities while maintaining a minimal footprint, making them ideal candidates for devices that require efficient power management and secure operations. By adhering to the open RISC-V standard, Nuclei ensures that these processors can be seamlessly integrated into various solutions, offering customizable options to fit specific system requirements.
The AHB-Lite Timer module from Roa Logic is compliant with the RISC-V Privileged 1.9.1 specification, offering a versatile timing solution for embedded applications. As an integral peripheral, it provides precise timing functionality, enabling applications to perform scheduled operations accurately. Its parameterized design allows developers to tailor the timer's features to the needs of their system. The module supports a broad range of timing tasks, from simple delays to complex timing sequences, making it suitable for varied embedded system requirements. The flexibility of its design ensures straightforward implementation, reducing complexity and enhancing overall system performance. With RISC-V compliance at its core, the AHB-Lite Timer delivers the synchronization and precision crucial for systems tasked with critical timing operations. Its adaptable architecture and dependable functionality make it an excellent choice for projects where timing accuracy is required.
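In the RISC-V privileged timer model that such a peripheral implements, a machine-timer interrupt becomes pending when the free-running `mtime` counter reaches the `mtimecmp` compare value, so software schedules a delay by writing `mtime + ticks` into `mtimecmp`. The sketch below shows that arithmetic; the 1 MHz tick rate is an assumed example value, not a Roa Logic parameter.

```c
#include <stdint.h>

/* Assumed platform tick rate for illustration (mtime increments per second). */
#define TIMER_FREQ_HZ 1000000ULL  /* 1 MHz: one tick per microsecond */

/* Compute the mtimecmp value that fires delay_us microseconds from now. */
uint64_t next_mtimecmp(uint64_t mtime_now, uint64_t delay_us) {
    uint64_t ticks = (delay_us * TIMER_FREQ_HZ) / 1000000ULL;
    return mtime_now + ticks;
}

/* Per the RISC-V privileged spec, the interrupt is pending
 * whenever mtime >= mtimecmp. */
int timer_expired(uint64_t mtime_now, uint64_t mtimecmp) {
    return mtime_now >= mtimecmp;
}
```

On real hardware, `mtime` and `mtimecmp` are memory-mapped registers accessed over the AHB-Lite bus; the helper above only models the compare semantics.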
RapidGPT is a next-generation electronic design automation tool powered by AI. Designed for those in the hardware engineering field, it allows for a seamless transition from ideas to physical hardware without the usual complexities of traditional design tools. The interface is highly intuitive, engaging users with natural language interaction to enhance productivity and reduce the time required for design iterations.

Enhancing the entire design process, RapidGPT begins with concept development and guides users through to the final stages of bitstream or GDSII generation. This tool effectively acts as a co-pilot for engineers, allowing them to easily incorporate third-party IPs, making it adaptable for various project requirements. This adaptability is paramount for industries where speed and precision are of the essence.

PrimisAI has integrated novel features such as AutoReview™, which provides automated HDL audits; AutoComment™, which generates AI-driven comments for HDL files; and AutoDoc™, which helps create comprehensive project documentation effortlessly. These features collectively make RapidGPT not only a design tool but also a comprehensive project management suite.

The effectiveness of RapidGPT is evident in its robust support for varying design complexities, providing a scalable solution that meets user demands from individual developers to large engineering teams seeking enterprise-grade capabilities.
RAIV represents Siliconarts' General Purpose-GPU (GPGPU) offering, engineered to accelerate data processing across diverse industries. This versatile GPU IP is essential in sectors engaged in high-performance computing tasks, such as autonomous driving, IoT, and sophisticated data centers. With RAIV, Siliconarts taps into the potential of the fourth industrial revolution, enabling rapid computation and seamless data management. The RAIV architecture is poised to deliver unmatched efficiency in high-demand scenarios, supporting massive parallel processing and intricate calculations. It provides an adaptable framework that caters to the needs of modern computing, ensuring balanced workloads and optimized performance. Whether used for VR/AR applications or supporting the back-end infrastructure of data-intensive operations, RAIV is designed to meet and exceed industry expectations. RAIV’s flexible design can be tailored to enhance a broad spectrum of applications, promising accelerated innovation in sectors dependent on AI and machine learning. This GPGPU IP not only underscores Siliconarts' commitment to technological advancement but also highlights its capability to craft solutions that drive forward computational boundaries.
Syntacore's SCR3 microcontroller core is a versatile option for developers looking to harness the power of a 5-stage in-order pipeline. Designed to support both 32-bit and 64-bit symmetric multiprocessing (SMP) configurations, this core is perfectly aligned with the needs of embedded applications requiring moderate power and resource efficiency coupled with enhanced processing capabilities. The architecture is fine-tuned to handle a variety of workloads, ensuring a balance between performance and power usage, making it suitable for sectors such as industrial automation, automotive sensors, and IoT devices. The inclusion of privilege modes, memory protection units (MPUs), and cache systems further enhances its capabilities, particularly in environments where system security and reliability are paramount. Developers will find the SCR3 core to be highly adaptable, fitting seamlessly into designs that need scalability and modularity. Syntacore's comprehensive toolkit, combined with detailed documentation, ensures that system integration is both quick and reliable, providing a robust foundation for varied applications.
The eSi-1650 is a compact, low-power 16-bit CPU core integrating an instruction cache, making it an ideal choice for mature process nodes reliant on OTP or Flash program memory. By omitting large on-chip RAMs, the IP core optimizes power and area efficiency and permits the CPU to capitalize on its maximum operational frequency beyond OTP/Flash constraints.
The RISC-V Core IP developed by AheadComputing Inc. stands out in the field of 64-bit application processors. Designed to deliver exceptional per-core performance, this processor is engineered with the highest standards to maximize the Instructions Per Cycle (IPC) efficiency. AheadComputing's RISC-V Core IP is continuously refined to address the growing demands of high-performance computing applications. The innovative architecture of this core allows for seamless execution of complex algorithms while achieving superior speed and efficiency. This design is crucial for applications that require fast data processing and real-time computational capabilities. By integrating advanced power management techniques, the RISC-V Core IP ensures energy efficiency without sacrificing performance, making it suitable for a wide range of electronic devices. Anticipating future computing needs, AheadComputing's RISC-V Core IP incorporates state-of-the-art features that support scalability and adaptability. These features ensure that the IP remains relevant as technology evolves, providing a solid foundation for developing next-generation computing solutions. Overall, it embodies AheadComputing’s commitment to innovation and performance excellence.
The Codasip RISC-V BK Core Series is designed to offer flexible and high-performance core options catering to a wide range of applications, from low-power tasks to intricate computational needs. This series achieves an optimal balance of power consumption and processing speed, making it suitable for applications demanding energy efficiency without compromising performance. These cores are fully RISC-V compliant, allowing for easy customization to suit specific needs by modifying the processor's architecture or instruction set through Codasip Studio. The BK Core Series streamlines the development of precise computing solutions, ideal for IoT edge devices and sensor controllers where both small area and low power are critical. Moreover, the BK Core Series supports architectural exploration, enabling users to optimize the core design specifically for their applications. This capability ensures that each core delivers the power, efficiency, and performance metrics required by modern technological solutions.
The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites. The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities. Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile-time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.
The KL720 AI SoC is designed for optimal performance-to-power ratios, achieving 0.9 TOPS per watt. This makes it one of the most efficient chips available for edge AI applications. The SoC is crafted to meet high processing demands, suitable for high-end devices including smart TVs, AI glasses, and advanced cameras. With an Arm Cortex-M4 CPU, it enables superior 4K imaging, full HD video processing, and advanced 3D sensing capabilities. The KL720 also supports natural language processing (NLP), making it ideal for emerging AI interfaces such as AI assistants and gaming gesture controls.
The Spiking Neural Processor T1 is designed as a highly efficient microcontroller that integrates neuromorphic intelligence closely with sensors. It employs a unique spiking neural network engine paired with a nimble RISC-V processor core, forming a cohesive unit for advanced data processing. With this setup, the T1 excels in delivering next-gen AI capabilities embedded directly at the sensor, operating within an exceptionally low power consumption range, ideal for battery-dependent and latency-sensitive applications. This processor marks a notable advancement in neuromorphic technology, allowing for real-time pattern recognition with minimal power draw. It supports various interfaces like QSPI, I2C, and UART, fitting into a compact 2.16mm x 3mm package, which facilitates easy integration into diverse electronic devices. Additionally, its architecture is designed to process different neural network models efficiently, from spiking to deep neural networks, providing versatility across applications. The T1 Evaluation Kit furthers this ease of adoption by enabling developers to use the Talamo SDK to create or deploy applications readily. It includes tools for performance profiling and supports numerous common sensors, making it a strong candidate for projects aiming to leverage low-power, intelligent processing capabilities. This innovative chip's ability to manage power efficiency with high-speed pattern processing makes it especially suitable for advanced sensing tasks found in wearables, smart home devices, and more.
Bluespec's Portable RISC-V Cores are designed to bring flexibility and extended functionality to FPGA platforms such as Achronix, Xilinx, Lattice, and Microsemi. They offer support for operating systems like Linux and FreeRTOS, making them versatile for various applications. These cores are accompanied by standard open-source development tools, which facilitate seamless integration and development processes. By utilizing these tools, developers can modify and enhance the cores to suit their specific needs, ensuring a custom fit for their projects. The portable cores are an excellent choice for developers looking to deploy RISC-V architecture across different FPGA platforms without being tied down to proprietary solutions. With Bluespec's focus on open-source, users can experience freedom in innovation and development without sacrificing performance or compatibility.
The eSi-1600 is a 16-bit CPU core designed for cost-sensitive and power-efficient applications. It delivers performance comparable to 32-bit CPUs while maintaining a system cost comparable to 8-bit processors. This IP is particularly well-suited for control applications with limited memory resources, demonstrating excellent compatibility with mature mixed-signal technologies.
The Chipchain C100 is a pioneering solution in IoT applications, providing a highly integrated single-chip design that focuses on low power consumption without compromising performance. Its design incorporates a powerful 32-bit RISC-V CPU which can reach speeds up to 1.5GHz. This processing power ensures efficient and capable computing for diverse IoT applications. This chip stands out with its comprehensive integrated features including embedded RAM and ROM, making it efficient in both processing and computing tasks. Additionally, the C100 comes with integrated Wi-Fi and multiple interfaces for transmission, broadening its application potential significantly. Other notable features of the C100 include an ADC, LDO, and a temperature sensor, enabling it to handle a wide array of IoT tasks more seamlessly. With considerations for security and stability, the Chipchain C100 facilitates easier and faster development in IoT applications, proving itself as a versatile component in smart devices like security systems, home automation products, and wearable technology.
The RayCore MC is a revolutionary real-time path and ray-tracing GPU designed to enhance rendering with minimal power consumption. This GPU IP is tailored for real-time applications, offering a rich graphical experience without compromising on speed or efficiency. By utilizing advanced ray-tracing capabilities, RayCore MC provides stunning visual effects and lifelike animations, setting a high standard for quality in digital graphics. Engineered for scalability and performance, RayCore MC stands out in the crowded field of GPU technologies by delivering seamless, low-latency graphics. It is particularly suited for applications in gaming, virtual reality, and the burgeoning metaverse, where realistic rendering is paramount. The architecture supports efficient data management, ensuring that even the most complex visual tasks are handled with ease. RayCore MC's architecture supports a wide array of applications beyond entertainment, making it a vital tool in areas such as autonomous vehicles and data-driven industries. Its blend of power efficiency and graphical prowess ensures that developers can rely on RayCore MC for cutting-edge, resource-light graphic solutions.
Designed for exceptional performance in demanding environments, the SCR6 microcontroller core integrates advanced processing capabilities with power efficiency. Featuring a 12-stage, dual-issue out-of-order pipeline and a high-performance floating-point unit (FPU), it excels at managing computationally intensive tasks with finesse and speed, making it a prime candidate for next-gen microcontroller applications. With a focus on high bandwidth and efficient throughput, the SCR6 supports scalable deployments thanks to its symmetric multiprocessing (SMP) configurations. This design enables usage in sectors where swift processing and reliability are crucial, such as real-time industrial automation, automotive systems, and IoT platforms. Syntacore's SCR6 benefits from a well-rounded development environment, offering support that ensures high compatibility with a variety of platforms and applications. The core exemplifies Syntacore's commitment to providing innovative solutions that embody both the potential and flexibility of the RISC-V architecture.
The eSi-3200, a 32-bit cacheless core, is tailored for embedded control with its expansive and configurable instruction set. Its capabilities, such as 64-bit multiply-accumulate operations and fixed-point complex multiplications, cater effectively to signal processing tasks like FFTs and FIRs. Additionally, it supports SIMD and single-precision floating point operations, coupled with efficient power management features, enhancing its utility for diverse embedded applications.
The NMP-350 is an endpoint accelerator designed to deliver the lowest power and cost efficiency in its class. Ideal for applications such as driver authentication and health monitoring, it excels in automotive, AIoT/sensors, and wearable markets. The NMP-350 offers up to 1 TOPS performance with 1 MB of local memory, and is equipped with a RISC-V or Arm Cortex-M 32-bit CPU. It supports multiple use-cases, providing exceptional value for integrating AI capabilities into various devices. NMP-350's architectural design ensures optimal energy consumption, making it particularly suited to Industry 4.0 applications where predictive maintenance is crucial. Its compact nature allows for seamless integration into systems requiring minimal footprint yet substantial computational power. With support for multiple data inputs through AXI4 interfaces, this accelerator facilitates enhanced machine automation and intelligent data processing. This product is a testament to AiM Future's expertise in creating efficient AI solutions, providing the building blocks for smart devices that need to manage resources effectively. The combination of high performance with low energy requirements makes it a go-to choice for developers in the field of AI-enabled consumer technology.
Ncore Cache Coherent Interconnect is designed to tackle the multifaceted challenges in multicore SoC systems by introducing heterogeneous coherence and efficient cache management. This NoC IP optimizes performance by ensuring high throughput and reliable data transmission across multiple cores, making it indispensable for sophisticated computing tasks. Leveraging advanced cache coherency, Ncore maintains data integrity, crucial for maintaining system stability and efficiency in operations involving heavy computational loads. With its ISO26262 support, it caters to automotive and industrial applications requiring high reliability and safety standards. This interconnect technology pairs well with diverse processor architectures and supports an array of protocols, providing seamless integration into existing systems. It enables a coherent and connected multicore environment, enhancing the performance of high-stakes applications across various industry verticals, from automotive to advanced computing environments.
The SCR1 microcontroller core from Syntacore is an open-source, compact core tailored for deeply embedded applications. It features a straightforward 4-stage in-order pipeline, making it ideally suited for smaller, power-constrained devices where performance needs to be finely balanced with energy consumption. This core is particularly valuable in applications requiring a high degree of customization and flexibility. With a unique combination of low area footprint and efficiency, the SCR1 is a pivotal tool for developers involved in creating optimized, scalable systems, particularly in the fields of sensory data processing, IoT, and control systems. Its design architecture ensures that it can efficiently handle the demands of modern consumer electronics and other compact embedded devices. The SCR1 supports a rich ecosystem of development tools provided by Syntacore, ensuring that integration into various platforms is seamless. Syntacore's commitment to open-source development allows for a wide adoption of their core among a diverse range of projects and initiatives, enhancing the potential of the RISC-V architecture in global markets.