The 'Processor' category in the Silicon Hub Semiconductor IP catalog is a cornerstone of modern electronic device design. Processor semiconductor IPs serve as the brain of electronic devices, driving operations, processing data, and performing complex computations essential for a multitude of applications. These IPs include a wide variety of specific types such as CPUs, DSP cores, and microcontrollers, each designed with unique capabilities and applications in mind.
In this category, you'll find building blocks, which are fundamental components for constructing more sophisticated processors, and coprocessors that augment the capabilities of a main processor, enabling efficient handling of specialized tasks. The versatility of processor semiconductor IPs is evident in subcategories like AI processors, audio processors, and vision processors, each tailored to meet the demands of today’s smart technologies. These processors are central to developing innovative products that leverage artificial intelligence, enhance audio experiences, and enable complex image processing capabilities, respectively.
Moreover, there are security processors that empower devices with robust security features to protect sensitive data and communications, as well as IoT processors and wireless processors that drive connectivity and integration of devices within the Internet of Things ecosystem. These processors ensure reliable and efficient data processing in increasingly connected and smart environments.
Overall, the processor semiconductor IP category is pivotal for enabling the creation of advanced electronic devices across a wide range of industries, from consumer electronics to automotive systems, providing the essential processing capabilities needed to meet the ever-evolving technological demands of today's world. Whether you're looking for individual processor cores or fully integrated processing solutions, this category offers a comprehensive selection to support any design or application requirement.
The Akida Neural Processor is a sophisticated AI processing unit designed to handle complex neural network tasks with high precision and efficiency. Using an event-based processing model, Akida exploits data sparsity to reduce the number of operations performed, significantly lowering power usage while improving throughput. The processor is built around a mesh network interconnect, with each node equipped with configurable Neural Network Engines that handle convolutional and fully connected networks. These capabilities let Akida process data at the edge, delivering the high-speed, low-latency responses that real-time applications demand. Akida performs reliably across diverse use cases, from predictive maintenance to streaming analytics in sensors. By supporting on-chip learning and reducing cloud data exchanges, the processor strengthens data privacy and security, making it a trusted component for sensitive applications.
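The payoff of event-based sparsity exploitation can be illustrated with a toy sketch: if only nonzero activations ("events") trigger work, the multiply-accumulate count scales with input density rather than input size. This is a minimal illustration of the general principle, not BrainChip code; the function name and layer shapes are invented for the example.

```python
def dense_layer_eventbased(activations, weights):
    """Compute y = W @ x, visiting only nonzero (event) inputs.

    Illustrative sketch of event-based sparsity exploitation:
    zero activations are skipped entirely, so the MAC count
    drops in proportion to input sparsity.
    """
    n_out = len(weights)
    out = [0.0] * n_out
    macs = 0
    for j, a in enumerate(activations):
        if a == 0.0:          # no event -> no work performed
            continue
        for i in range(n_out):
            out[i] += weights[i][j] * a
            macs += 1
    return out, macs

# 75%-sparse toy input: only 2 of 8 entries carry events
x = [0.0, 2.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
W = [[1.0] * 8, [0.5] * 8]    # 2 outputs, 8 inputs
y, macs = dense_layer_eventbased(x, W)
print(y, macs)   # 4 MACs instead of the dense layer's 16
```

A dense implementation of the same layer would perform 2 × 8 = 16 MACs regardless of input content; here the sparse input cuts that to 4.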
The 2nd Generation Akida processor introduces substantial enhancements to BrainChip's neuromorphic processing platform, making it particularly well suited to intricate network models. It adds eight-bit weight and activation support, improving energy efficiency and computational performance without enlarging model size. By supporting an extensive application set, Akida 2nd Generation addresses diverse Edge AI needs without cloud dependencies. Notably, it incorporates Temporal Event-Based Neural Nets (TENNs) and Vision Transformers, enabling robust tracking through high-speed vision and audio processing. Built-in support for on-chip learning further improves AI efficiency by reducing reliance on cloud training. This versatile processor is a strong fit for spatio-temporal applications across the industrial, automotive, and healthcare sectors. Developers benefit from its Configurable IP Platform, which allows seamless scalability across multiple use cases. The Akida ecosystem, including MetaTF, gives developers a strong foundation for integrating cutting-edge AI capabilities into Edge systems while keeping data processing secure and private.
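Eight-bit weight and activation support generally means model parameters are stored as 8-bit integers plus a scale factor, shrinking memory traffic roughly 4x versus 32-bit floats. The sketch below shows generic symmetric int8 quantization to make that concrete; it is an assumption-laden illustration of 8-bit storage in general, not BrainChip's specific quantization scheme.

```python
def quantize_int8(values):
    """Symmetric per-tensor quantization to signed 8-bit.

    Returns the int8 codes and the scale needed to dequantize.
    Generic scheme for illustration only; the Akida toolchain's
    actual quantizer is not described here.
    """
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

weights = [0.52, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
dequant = [c * scale for c in q]
# Each dequantized weight lands within one quantization step of the original
assert all(abs(d - w) <= scale for d, w in zip(dequant, weights))
```

Each weight now occupies one byte plus a shared scale, which is where the bandwidth and energy savings of 8-bit inference come from.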
Speedcore embedded FPGA (eFPGA) IP represents a notable advancement in integrating programmable logic into ASICs and SoCs. Unlike standalone FPGAs, eFPGA IP lets designers tailor the exact dimensions of logic, DSP, and memory needed for their applications, making it an ideal choice for areas like AI, ML, 5G wireless, and more. Speedcore eFPGA can significantly reduce system costs, power requirements, and board space while maintaining flexibility by embedding only the necessary features into production. This IP is programmable using the same Achronix Tool Suite employed for standalone FPGAs. The Speedcore design process is supported by comprehensive resources and guidance, ensuring efficient integration into various semiconductor projects.
The Speedster7t FPGA family is crafted for high-bandwidth tasks, tackling the usual restrictions seen in conventional FPGAs. Manufactured on the TSMC 7nm FinFET process, these FPGAs feature a pioneering 2D network-on-chip architecture and an array of machine learning processors for optimal high-bandwidth performance and AI/ML workloads. They integrate interfaces for high-speed GDDR6 memory, 400G Ethernet, and PCI Express Gen5. The 2D network-on-chip connects these interfaces to upward of 80 access points in the FPGA fabric, enabling ASIC-like performance while retaining complete programmability. Users can get started with the VectorPath accelerator card, which houses a Speedster7t FPGA. The family is well suited to applications such as 5G infrastructure, computational storage, and test and measurement.
The NMP-750 is designed as a cutting-edge performance accelerator for edge computing, tailored to sectors such as automotive, telecommunications, and smart factories. It supports mobility, autonomous control, and process automation, setting a benchmark in high-performance edge computing. With up to 16 TOPS of processing power and 16 MB of local memory, it pairs with RISC-V/Arm Cortex-R or Cortex-A 32-bit CPUs for substantial computational tasks. Its architecture supports a rich set of applications, including multi-camera stream processing and energy management, enabled by AXI4 128-bit interfaces that manage heavy data traffic efficiently. The accelerator is particularly suited to demanding workloads such as spectral-efficiency optimization and smart building management. Designed for scalability and reliability, the NMP-750 pushes past traditional computing barriers, ensuring strong real-time performance in next-generation technology deployments.
The KL730 AI SoC is equipped with a state-of-the-art third-generation reconfigurable NPU architecture, delivering up to 8 TOPS of computational power. This architecture improves computational efficiency, particularly with the latest CNN networks and transformer applications, while reducing DDR bandwidth demands. The KL730 excels in video processing, supporting 4K 60 FPS output along with capabilities such as noise reduction, wide dynamic range, and low-light imaging. It is ideal for applications such as intelligent security, autonomous driving, and video conferencing.
The NMP-350 is designed as a cost-effective endpoint accelerator with a strong emphasis on low power consumption, making it ideal for applications in AIoT, automotive, and smart appliances. Its robust architecture supports applications such as driver authentication, digital mirrors, and predictive maintenance while ensuring efficient resource management. Delivering up to 1 TOPS, the NMP-350 integrates up to 1 MB of local memory and pairs with RISC-V/Arm Cortex-M 32-bit CPU cores. It uses a triple AXI4 interface, each 128 bits wide, to manage host, CPU, and data traffic seamlessly. The architecture supports a host of applications in wearables, Industry 4.0, and health monitoring. Strategically targeting markets like AIoT/sensors and smart appliances, the NMP-350 positions itself as a favored choice for low-cost, power-sensitive device designs. As industries move toward energy-efficient technologies, products like the NMP-350 offer a competitive edge in smart, green development.
Designed for high-performance applications, the Metis AIPU PCIe AI Accelerator Card employs four Metis AI Processing Units to deliver exceptional computational power. With its ability to reach up to 856 TOPS, this card is tailored for demanding vision applications, making it suitable for real-time processing of multi-channel video data. The PCIe form factor ensures easy integration into existing systems, while the customized software platform simplifies the deployment of neural networks for tasks like YOLO object detection. This accelerator card ensures scalability and efficiency, allowing developers to implement AI applications that are both powerful and cost-effective. The card’s architecture also takes advantage of RISC-V and Digital-In-Memory Computing technologies, bringing substantial improvements in speed and power efficiency.
The Origin E1 neural engines by Expedera redefine efficiency and customization for low-power AI solutions. Specially crafted for edge devices like home appliances and security cameras, these engines serve ultra-low-power applications that demand continuous sensing capabilities. They reduce power consumption to as low as 10-20 mW and eliminate the need for external memory access, which also keeps data on-chip and secure. The advanced packet-based architecture enhances performance by enabling parallel layer execution, optimizing resource utilization. Designed as a perfect fit for dedicated AI functions, Origin E1 is tailored to support specific neural networks efficiently while reducing silicon area and system cost. It supports a variety of network types, from CNNs to RNNs, making it versatile across numerous applications. The engine is also among the most power-efficient in the industry, delivering 18 TOPS per watt. Origin E1 ships with a full TVM-based software stack for easy integration and performance optimization across customer platforms. It supports a wide array of data types and networks, with sustained processor utilization averaging 80%. This makes it a reliable choice for OEMs seeking high performance in always-sensing applications, with a competitive edge in both power efficiency and security.
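The quoted 18 TOPS/W efficiency and the 10-20 mW power envelope can be cross-checked with simple arithmetic: power equals throughput divided by efficiency. The throughput figures below are derived from that relation for illustration, not Expedera specifications.

```python
# Power (W) = throughput (TOPS) / efficiency (TOPS per W)
efficiency_tops_per_w = 18.0   # quoted figure

def power_mw(throughput_tops):
    """Milliwatts needed to sustain a given throughput at 18 TOPS/W."""
    return throughput_tops / efficiency_tops_per_w * 1000.0

# Throughput implied by the quoted 10-20 mW envelope (derived, not a spec):
for budget_mw in (10.0, 20.0):
    tops = budget_mw / 1000.0 * efficiency_tops_per_w
    print(f"{budget_mw:.0f} mW -> {tops:.2f} TOPS")
# 10 mW -> 0.18 TOPS, 20 mW -> 0.36 TOPS
```

So an always-sensing workload in the hundreds of GOPS fits comfortably inside a tens-of-milliwatts budget at this efficiency class.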
The ORC3990 SoC is a state-of-the-art solution designed for satellite IoT applications within Totum's DMSS™ network. This low-power sensor-to-satellite system integrates an RF transceiver, ARM CPUs, memories, and a power amplifier (PA) to offer seamless IoT connectivity via LEO satellite networks. It boasts an optimized link budget for effective indoor signal coverage, eliminating the need for additional GNSS components. This compact SoC supports industrial temperature ranges and is engineered for a 10+ year battery life using advanced power management.
Spec-TRACER is a robust requirements lifecycle management platform tailored for FPGA and ASIC projects. Focusing on facilitating seamless requirements capture, management, and traceability, it ensures that every stage of the design process is aligned with the initial specifications. Its analytical features further enable a comprehensive evaluation of design progress, promoting efficiency and thoroughness throughout the development lifecycle.
The Veyron V2 CPU builds on the innovation of its predecessor, offering unparalleled performance for AI and data center-class applications. This successor to the V1 CPU integrates seamlessly into environments requiring high computational power and efficiency, making it well suited to modern data challenges. Built on the RISC-V architecture, it provides an open-standard alternative to traditional closed processor models. With a heavy emphasis on AI and machine learning workloads, Veyron V2 is designed to excel at complex data-centric tasks. The CPU adapts quickly to multifaceted requirements, proving indispensable from enterprise servers to hyperscale data centers. Its superior design enables it to outperform many contemporary alternatives, positioning it as a leading component for next-generation computing solutions. The processor's adaptability allows rapid, smooth integration into existing systems, facilitating quick upgrades and enhancements tailored to specific operational needs. Because the Veyron V2 is highly energy-efficient, it helps data centers reach greater sustainability benchmarks without sacrificing performance.
The Metis AIPU M.2 Accelerator Module is a cutting-edge AI processing unit designed to boost the performance of edge computing tasks. This module integrates seamlessly with innovative applications, offering a robust solution for inference at the edge. It excels in vision AI tasks with its dedicated 512MB LPDDR4x memory, providing the necessary storage for complex tasks. Offering unmatched energy efficiency, the Metis AIPU M.2 module is capable of delivering significant performance gains while maintaining minimal power consumption. At an accessible price point, this module opens up AI processing capabilities for a variety of applications. As an essential component of next-generation vision processing systems, it is ideal for industries seeking to implement AI technologies swiftly and effectively.
The Origin E8 NPUs represent Expedera's cutting-edge solution for environments demanding the utmost in processing power and efficiency. This high-performance core scales from 32 to 128 TOPS in a single-core configuration, addressing complex AI tasks in automotive and data-centric operational settings. The E8's architecture stands apart in its ability to handle multiple concurrent tasks without any compromise in performance. It adopts Expedera's signature packet-based architecture for optimized parallel execution and resource management, removing the need for hardware-specific tweaks. The Origin E8 also supports input resolutions up to 8K and integrates well with both standard and custom neural networks, enhancing its utility in future-forward AI applications. Leveraging a flexible, scalable design, the E8 IP cores come with an extensive software suite to streamline AI deployment. Field-proven and already deployed in a multitude of consumer vehicles, Expedera's Origin E8 is a robust, reliable choice for developers needing optimized AI inference, ideally suited to data centers and high-performance automotive systems.
Akida IP stands as an advanced neuromorphic processor, emulating brain-like processing to efficiently handle sensor inputs at the point of acquisition. This digital processor offers strong performance and precision with significant reductions in power usage. By running AI/ML tasks locally, it decreases latency and enhances data privacy. Akida IP is built to infer and learn at the edge, offering highly customizable, event-based neural processing. Its architecture is scalable and compact, supporting a mesh network of up to 256 nodes. Each node includes four configurable Neural Processing Engines (NPEs) that handle convolutional and fully connected processing. By leveraging data sparsity, Akida reduces the number of operations required, making it a cost-effective solution for a range of edge AI applications. Including MetaTF support for model simulation, Akida IP is delivered as a fully synthesizable RTL IP package compatible with standard EDA tools, emphasizing ease of integration and deployment. This enables developers to swiftly design, develop, and implement custom AI solutions with robust security and privacy protection.
The SCR9 Processor Core is a cutting-edge processor designed for entry-level server-class and personal computing applications. Featuring a 12-stage dual-issue out-of-order pipeline, it supports robust RISC-V extensions including vector operations and a high-complexity memory system. This core is well-suited for high-performance computing, offering exceptional power efficiency with multicore coherence and the ability to integrate accelerators, making it suitable for areas like AI, ML, and enterprise computing.
The NaviSoC, a flagship product of ChipCraft, combines a GNSS receiver with an on-chip application processor, providing an all-in-one solution for high-precision navigation and timing applications. It is designed to meet the rigorous demands of industries such as automotive, UAVs, and smart agriculture. A standout feature is support for all major global navigation satellite systems, giving it versatile functionality across professional uses. The NaviSoC pairs low power consumption with robust computational capability, and its adaptability to different tasks makes it a preferred choice in many industries. It integrates seamlessly into systems requiring precision and reliability, providing developers with a wide array of programmable peripherals and interfaces. The design ethos of the NaviSoC revolves around minimizing power usage while ensuring high precision and accuracy, making it an ideal component for battery-powered and portable devices. Additionally, ChipCraft provides integrated software development tools and navigation firmware, helping clients achieve fast time-to-market. The NaviSoC's design takes a comprehensive approach, factoring in real-world requirements such as temperature variation and environmental challenges, yielding a resilient and adaptable product for diverse uses.
The AX45MP is engineered as a high-performance processor supporting multicore architectures and advanced data processing, particularly suitable for applications requiring extensive computational efficiency. Part of the AndesCore processor line, it builds on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computation, ensuring data coherence and efficient power usage.
The iniCPU IP core is a sophisticated processor module by Inicore designed to provide robust computational capabilities for system-on-chip developments. It integrates smoothly into a variety of applications, offering a balance of performance and low power consumption. This CPU core is adaptable, allowing for design scalability across multiple applications from consumer electronics to industrial automation. Engineered for efficiency, the iniCPU is structured to handle complex workloads with ease, serving as a pivotal component in integrated system solutions. Its architecture supports extensive interfacing capabilities, which ensures the core can be coupled with various peripherals, enhancing system-wide functionality and performance. Inicore's iniCPU stands out in its versatility, supporting seamless transition from FPGA prototyping to ASIC deployment. This flexibility shortens product development cycles and helps companies bring innovative products to market faster. The IP core’s robust design methodology ensures it meets stringent industry standards for reliability and performance.
The RV12 is a flexible RISC-V CPU designed for embedded applications. It stands as a single-core processor, compatible with RV32I and RV64I architectures, offering a configurable solution that adheres to the industry-standard RISC-V instruction set. The processor's Harvard architecture supports concurrent instruction and data memory accesses, optimizing its operation for a wide array of embedded tasks.
The eSi-3200 is a compact 32-bit processor core created for low-power, high-efficiency scenarios. Designed for seamless integration into ASICs and FPGAs, it operates optimally in embedded control applications. The architecture supports up to 32 general-purpose registers and a broad instruction set aimed at minimizing power and resource usage. The processor features a continuous instruction pipeline and an optional floating-point unit, making it capable of handling complex arithmetic and signal-processing tasks with ease. Its cache-less design provides the deterministic timing crucial for real-time control applications. The processor's capabilities are rounded out by robust debugging tools and AMBA-compliant connectivity, which facilitate straightforward system integration. This makes the eSi-3200 an ideal choice for engineers designing responsive, energy-efficient control systems.
The AndeShape Platforms are designed to streamline system development by providing a diverse suite of IP solutions for SoC architecture. These platforms encompass a variety of product categories, including the AE210P for microcontroller applications, AE300 and AE350 AXI fabric packages for scalable SoCs, and AE250 AHB platform IP. These solutions facilitate efficient system integration with Andes processors. Furthermore, AndeShape offers a sophisticated range of development platforms and debugging tools, such as ADP-XC7K160/410, which reinforce the system design and verification processes, providing a comprehensive environment for the innovative realization of IoT and other embedded applications.
The eSi-1600 is a 16-bit central processing unit designed for cost-effective and low-power operation, making it ideal for integration into ASICs and FPGAs. It is engineered to deliver performance comparable to more advanced processors, with costs rivaling those of simpler CPUs. This processor is well-suited for control applications requiring under 64kB of memory in mixed-signal processes. With a RISC architecture comprising 16 or 32 general-purpose registers, the eSi-1600 supports optimal instruction density and execution efficiency. Its capabilities are leveraged through features like user-defined instructions and sophisticated interrupt handling, enhancing performance for various applications. The processor's efficient pipeline architecture allows for high-frequency operation, even in older process nodes, while minimizing power consumption. Comprehensive hardware debugging facilities, including JTAG support and an optional memory protection unit, accompany the eSi-1600. This makes it a flexible and efficient solution for applications needing robust control processing, serving as a bridge between 8-bit efficiency and higher performance 32-bit systems.
Engineered for high-performance tasks, the eSi-3250 is a 32-bit processor core tailored to systems that demand significant computational power while interfacing with slow memories. It is particularly well suited to designs where caching plays a pivotal role because high-latency memories, such as external flash, are involved. The core provides configurable instruction and data caches and a wide range of interrupts, and it supports user and supervisor modes. A memory management unit can be integrated for memory protection and virtual memory. Delivering strong performance with a structured architectural design, the eSi-3250 balances power and performance needs, and applies broadly to areas requiring enhanced processing within tightly controlled memory-access environments.
The High Performance RISC-V Processor from Cortus represents the forefront of high-end computing, designed for applications demanding exceptional processing speeds and throughput. It features an out-of-order execution core that supports both single-core and multi-core configurations for diverse computing environments. This processor specializes in handling complex tasks requiring multi-threading and cache coherency, making it suitable for applications ranging from desktops and laptops to high-end servers and supercomputers. It includes integrated vector and AI accelerators, enhancing its capability to manage intensive data-processing workloads efficiently. Furthermore, this RISC-V processor is adaptable for advanced embedded systems, including automotive central units and AI applications in ADAS, providing enormous potential for innovation and performance across various markets.
The MIPI™ V-NLM-01 is a Non-Local Means (NLM) image noise reduction core, designed to enhance image quality by minimizing noise while preserving detail. This core is highly configurable, allowing users to customize the search window size and the number of bits per pixel, thereby tailoring the noise reduction process to specific application demands. Specially optimized for HDMI output resolutions of 2048x1080 and frame rates from 30 to 60 fps, the V-NLM-01 utilizes an efficient algorithmic approach to deliver natural and artifact-free images. Its parameterized implementation ensures adaptability across various image processing environments, making it essential for applications where high fidelity image quality is critical. The V-NLM-01 exemplifies VLSI Plus Ltd.'s prowess in developing specialized IP cores that significantly enhance video quality. Its capacity to effectively process high-definition video data makes it suitable for integration in a wide range of digital video platforms, ensuring optimal visual output.
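Non-Local Means replaces each pixel with a weighted average of pixels whose surrounding patches look similar, not merely pixels that are nearby; the search-window and patch sizes are exactly the parameters such a core exposes. Below is a minimal grayscale sketch of the algorithm itself (function names, parameters, and the filter strength `h` are illustrative, not the V-NLM-01 implementation).

```python
import math

def nlm_pixel(img, y, x, patch=1, search=3, h=10.0):
    """Denoise one pixel of a 2D grayscale image (list of lists)
    with Non-Local Means: average over the search window,
    weighting each candidate by how similar its patch is to ours."""
    H, W = len(img), len(img[0])

    def get_patch(cy, cx):
        # Patch around (cy, cx), clamped at the image borders
        return [img[min(max(cy + dy, 0), H - 1)][min(max(cx + dx, 0), W - 1)]
                for dy in range(-patch, patch + 1)
                for dx in range(-patch, patch + 1)]

    ref = get_patch(y, x)
    num = den = 0.0
    for sy in range(max(y - search, 0), min(y + search + 1, H)):
        for sx in range(max(x - search, 0), min(x + search + 1, W)):
            cand = get_patch(sy, sx)
            d2 = sum((a - b) ** 2 for a, b in zip(ref, cand)) / len(ref)
            w = math.exp(-d2 / (h * h))   # similar patches get weight ~1
            num += w * img[sy][sx]
            den += w
    return num / den

# A flat image stays flat: every patch matches, so the result is the
# plain average of the search window.
flat = [[100.0] * 8 for _ in range(8)]
assert abs(nlm_pixel(flat, 4, 4) - 100.0) < 1e-9
```

A hardware core evaluates this window in a streaming pipeline rather than per-pixel loops, which is why the configurable search-window size directly trades quality against gate count and throughput.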
Origin E2 NPUs focus on delivering power and area efficiency, making them ideal for on-device AI applications in smartphones and edge nodes. These processing units support a wide range of neural networks, including video, audio, and text-based applications, all while maintaining impressive performance metrics. The unique packet-based architecture ensures effective performance with minimal latency and eliminates the need for hardware-specific optimizations. The E2 series offers customization options allowing it to fit specific application needs perfectly, with configurations supporting up to 20 TOPS. This flexibility represents significant design advancements that help increase processing efficiency without introducing latency penalties. Expedera's power-efficient design results in NPUs with industry-leading performance at 18 TOPS per Watt. Further augmenting the value of E2 NPUs is their ability to run multiple neural network types efficiently, including LLMs, CNNs, RNNs, and others. The IP is field-proven, deployed in over 10 million consumer devices, reinforcing its reliability and effectiveness in real-world applications. This makes the Origin E2 an excellent choice for companies aiming to enhance AI capabilities while managing power and area constraints effectively.
The Mixed-Signal CODEC offered by Archband Labs is engineered to enhance the performance of audio and voice devices, handling conversions between analog and digital signals efficiently. Designed to cater to various digital audio interfaces such as PWM, PDM, PCM conversions, I2S, and TDM, it ensures seamless integration into complex audio systems. Well-suited for low-power and high-performance applications, this CODEC is frequently deployed in audio systems across consumer electronics, automotive, and edge computing devices. Its robust design ensures reliable operation within wearables, smart home devices, and advanced home entertainment systems, handling pressing demands for clarity and efficiency in audio signal processing. Engineers benefit from its extensive interfacing capabilities, supporting a spectrum of audio inputs and outputs. The CODEC's compact architecture ensures ease of integration, allowing manufacturers to develop innovative and enhanced audio platforms that meet diverse market needs.
NeuroMosAIc Studio is a comprehensive software platform designed to support AI modeling and deployment, offering a suite of tools that streamline the development of neural networks for high-performance computing tasks. The solution converts, compresses, and optimizes AI models, preparing them for integration with AiM Future's hardware accelerators. The platform includes network quantization, using formats such as 1-bit, FXP8, and FXP16, and network optimization for efficient hardware generation. Advanced mapping tools and precision-analysis components ensure optimal deployment and performance alignment with AiM Future's accelerators. Users can leverage the studio's compilation, simulation, and AI-aware training capabilities to refine AI models both in the cloud and at the edge, optimizing quantization and adjustment processes. NeuroMosAIc Studio is pivotal in enhancing the performance of AI solutions across applications from smart city management to advanced AR/VR experiences.
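Fixed-point formats such as FXP8 and FXP16 store a value as an integer with an implied binary point. The sketch below encodes a weight in generic Qm.n fixed point to show the precision trade-off between word sizes; the exact bit layouts NeuroMosAIc Studio uses are not specified in this catalog, so the Q2.6/Q2.14 split here is an assumption for illustration.

```python
def to_fxp(value, total_bits, frac_bits):
    """Encode a float as signed fixed point (generic Qm.n format;
    the real FXP8/FXP16 layouts may differ)."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return max(lo, min(hi, round(value * scale)))  # saturate, then round

def from_fxp(code, frac_bits):
    """Decode a fixed-point code back to a float."""
    return code / (1 << frac_bits)

w = 0.734
fxp8 = to_fxp(w, 8, 6)      # 8-bit word, 6 fractional bits (Q2.6)
fxp16 = to_fxp(w, 16, 14)   # 16-bit word, 14 fractional bits (Q2.14)
print(from_fxp(fxp8, 6), from_fxp(fxp16, 14))
# The 16-bit format recovers the weight with far smaller error
assert abs(from_fxp(fxp16, 14) - w) < abs(from_fxp(fxp8, 6) - w)
```

Precision analysis in a tool like this amounts to running exactly such round-trips over every tensor and checking whether the accumulated error stays within the network's accuracy budget.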
The TimbreAI T3 is an ultra-low-power AI engine specifically designed for audio applications, providing optimal performance in noise reduction tasks for devices like headsets. Known for its energy efficiency, the T3 operates with less than 300 microwatts of power consumption, allowing it to support performance-hungry applications without requiring external memory. The innovative architecture of TimbreAI leverages a packet-based framework focusing on achieving superior power efficiency and customization to the specific requirements of audio neural networks. This tailored engineering ensures no alteration is needed in trained models to achieve the desired performance metrics, thereby establishing a new standard in energy-efficient AI deployments across audio-centric devices. Geared towards consumer electronics and wearables, the T3 extends the potential for battery life in TWS headsets and similar devices by significantly reducing power consumption. With its preconfiguration for handling common audio network functions, TimbreAI provides a seamless development environment for OEMs eager to integrate AI capabilities with minimal power and area overheads.
Cortus's Automotive AI Inference SoC is a breakthrough solution tailored for autonomous driving and advanced driver assistance systems. The SoC combines efficient image processing with AI inference capabilities, optimized for city infrastructure and mid-range vehicle markets. Built on a RISC-V architecture, it can run specialized algorithms, such as those in the YOLO series, for fast and accurate image recognition. Its low power consumption suits embedded automotive applications that require enhanced processing without compromising energy efficiency. The chip is suitable for Level 2 and Level 4 autonomous driving systems, providing a comprehensive AI-driven platform that enhances safety and operational capability in urban settings.
The Chimera GPNPU by Quadric is a versatile processor specifically designed to enhance machine learning inference on a broad range of devices. It blends traditional digital signal processing (DSP) and neural processing unit (NPU) capabilities, allowing it to handle complex ML networks alongside conventional C++ code. Designed for adaptability, the Chimera GPNPU architecture enables easy porting of models and application software, making it a robust solution for rapidly evolving AI technologies. A key feature is its scalable design, which extends from 1 to a remarkable 864 TOPS, covering standard through advanced high-performance requirements. This scalability is coupled with support for a broad range of ML networks, such as classic backbones, vision transformers, and large language models, fulfilling various computational needs across industries. The Chimera GPNPU also excels in automotive applications, including ADAS and ECU systems, thanks to its ASIL-ready design. The processor's hybrid architecture merges Von Neumann and 2D SIMD matrix capabilities, promoting efficient execution of scalar, vector, and matrix operations. It offers a deterministic execution pipeline and extensive customization options, including configurable instruction caches and local register memories that optimize memory usage and power efficiency. This design reduces off-chip memory accesses, sustaining high performance while minimizing power consumption.
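The appeal of a unified GPNPU programming model is that matrix-style ML operators and ordinary scalar control code live in one source program, rather than being split between an NPU driver and a separate DSP toolchain. The sketch below illustrates that idea in a language-agnostic way (Chimera itself is programmed in C++; the function names and data here are invented for the example).

```python
def conv1d(signal, kernel):
    """Matrix/vector-style operator: the kind of work an NPU
    block accelerates."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def postprocess(features, threshold):
    """Ordinary scalar control code: the kind of work a DSP/CPU
    would run, here living in the same source as the ML operator."""
    return [1 if f > threshold else 0 for f in features]

# One pipeline, one program: operator then scalar post-processing
signal = [0.0, 1.0, 3.0, 1.0, 0.0, 0.0]
edge_kernel = [-1.0, 0.0, 1.0]
feats = conv1d(signal, edge_kernel)
flags = postprocess(feats, 0.5)
print(feats, flags)   # conv output and thresholded detections
```

On a split DSP-plus-NPU design, the hand-off between these two functions would cross a driver boundary and often an off-chip memory round-trip; a unified pipeline keeps it as an ordinary function call.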
The KL630 AI SoC embodies next-generation AI chip technology with a pioneering NPU architecture. It uniquely supports Int4 precision and transformer networks, offering superb computational efficiency combined with low power consumption. Built around an Arm Cortex-A5 CPU, it supports a range of AI frameworks and is designed to handle scenarios from smart security to automotive systems, providing robust capability in both high and low light conditions.
The NMP-550 is a performance-focused accelerator for applications that demand high efficiency, especially in fields such as automotive, drones, and AR/VR. It serves a variety of application needs, including driver monitoring, image/video analytics, and heightened security, through its powerful architecture and processing capability. Offering up to 6 TOPS of compute, the NMP-550 includes up to 6 MB of local memory. Featuring a RISC-V or Arm Cortex-M/A 32-bit CPU, the product ensures robust processing for advanced applications. Its triple AXI4 interface provides seamless 128-bit data exchange across host, CPU, and data channels, increasing flexibility for technology integrators. Also well suited to medical devices, the product extends into security and surveillance, supporting processes such as super-resolution and fleet management. Its comprehensive design and efficiency make it an optimal choice for applications demanding elevated performance within constrained resources.
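To give a feel for what a 128-bit AXI4 data path offers, its theoretical peak throughput can be estimated from the bus width and clock; the clock frequency below is an assumed example, not an NMP-550 specification:

```python
# Rough sketch: theoretical peak throughput of an AXI4 data path.
# The 500 MHz clock is an illustrative assumption, not a product spec.
def axi_bandwidth_gbps(bus_width_bits: int, clock_mhz: float) -> float:
    # A saturated burst transfers one full-width beat per clock cycle.
    return bus_width_bits * clock_mhz * 1e6 / 1e9

# 128-bit bus at an assumed 500 MHz -> 64 Gbit/s peak per interface
print(axi_bandwidth_gbps(128, 500))
```

With three such interfaces, host, CPU, and data traffic can each get a dedicated path rather than contending for one bus.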
The xcore.ai platform by XMOS Semiconductor is a sophisticated and cost-effective solution aimed specifically at intelligent IoT applications. Harnessing a unique multi-threaded micro-architecture, xcore.ai provides superior low latency and highly predictable performance, tailored for diverse industrial needs. It is equipped with 16 logical cores divided across two multi-threaded processing tiles. These tiles come enhanced with 512 kB of SRAM and a vector unit supporting both integer and floating-point operations, allowing the device to handle both simple and complex computational demands efficiently. A key feature of the xcore.ai platform is its powerful interprocessor communication infrastructure, which enables seamless high-speed communication between processors and scalability across multiple systems on a chip. Within this homogeneous environment, developers can comfortably integrate DSP, AI/ML, control, and I/O functionalities, allowing the device to adapt to specific application requirements efficiently. Moreover, the software-defined architecture allows optimal configuration, reducing power consumption and achieving cost-effective intelligent solutions. The xcore.ai platform shows impressive DSP capabilities, thanks to a scalar pipeline that supports 32-bit floating-point operations at peak rates of up to 1600 MFLOPS. AI/ML capabilities are also robust, with support for various bit vector operations, making the platform a strong contender for AI applications requiring homogeneous computing environments and exceptional operator integration.
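A figure like 1600 MFLOPS follows from simple pipeline arithmetic; the issue width and clock in the sketch below are assumptions chosen only to reproduce the quoted number, not published xcore.ai parameters:

```python
# Back-of-envelope peak-MFLOPS estimate for a scalar FP pipeline.
# Issue width and clock are illustrative assumptions, not XMOS specs.
def peak_mflops(flops_per_cycle: int, clock_mhz: float) -> float:
    return flops_per_cycle * clock_mhz

# e.g. one fused multiply-add (2 FLOPs) per cycle at an assumed 800 MHz
print(peak_mflops(2, 800))  # -> 1600.0
```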
The eSi-Comms solution provides a highly parameterisable and configurable IP suite for communication ASIC designs. This comprehensive collection includes OFDM-based modem and DFE IPs supporting a vast array of contemporary air interface standards, including 4G, 5G, Wi-Fi, and DVB. It offers robust and efficient modulation, equalization, and error-correction solutions built on advanced digital signal processing algorithms. Its synchronization and demodulation capabilities across multiple standards equip systems for reliable data flow management. The adaptable DFE features provide precise digital frequency conversion and related enhancements, strengthening both the transmitting and receiving ends of communication systems. This IP powers wireless sensors, remote metering, and cellular devices, ensuring seamless integration into a diverse range of communication applications.
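The OFDM modulation at the heart of the air interfaces listed above can be sketched in a few lines. This is a generic textbook illustration, not eSi-Comms code; the subcarrier count and cyclic-prefix length are arbitrary examples:

```python
import numpy as np

# Textbook OFDM sketch: map QPSK symbols onto subcarriers, take an
# inverse FFT, and prepend a cyclic prefix. Sizes are illustrative.
rng = np.random.default_rng(0)
n_subcarriers, cp_len = 64, 16

# Random QPSK symbols, one per subcarrier, normalized to unit power
bits = rng.integers(0, 2, size=(n_subcarriers, 2))
qpsk = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Modulation: the IFFT turns frequency-domain symbols into a time signal
time_signal = np.fft.ifft(qpsk) * np.sqrt(n_subcarriers)
ofdm_symbol = np.concatenate([time_signal[-cp_len:], time_signal])

# Demodulation reverses the steps: strip the prefix, take the FFT
recovered = np.fft.fft(ofdm_symbol[cp_len:]) / np.sqrt(n_subcarriers)
assert np.allclose(recovered, qpsk)
```

The cyclic prefix is what lets a receiver equalize multipath channels with a single complex multiply per subcarrier, which is why OFDM underpins 4G, 5G, Wi-Fi, and DVB alike.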
The Origin E6 neural engines are built to push the boundaries of what's possible in edge AI applications. Supporting the latest in AI model innovations, such as generative AI and various traditional networks, the E6 scales from 16 to 32 TOPS, aimed at balancing performance, efficiency, and flexibility. This versatility is essential for high-demand applications in next-generation devices like smartphones, digital reality setups, and consumer electronics. Expedera’s E6 employs packet-based architecture, facilitating parallel execution that leads to optimal resource usage and eliminating the need for dedicated hardware optimizations. A standout feature of this IP is its ability to maintain up to 90% processor utilization even in complex multi-network environments, thus proving its robustness and adaptability. Crafted to fit various use cases precisely, E6 offers a comprehensive TVM-based software stack and is well-suited for tasks that require simultaneous running of numerous neural networks. This has been proven through its deployment in over 10 million consumer units. Its design effectively manages power and system resources, thus minimizing latency and maximizing throughput in demanding scenarios.
The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.
AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores uniformly support the RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.
InCore's RISC-V Core-hub Generators are a revolutionary tool designed to give developers unparalleled control over their SoC designs. These generators enable the customization of core-hub configurations down to the ISA and microarchitecture level, promoting a tailored approach to chip creation. Built around the robustness of the RISC-V architecture, the Core-hub Generators support versatile application needs, allowing designers to innovate without boundaries. The Core-hub concept is pivotal to speeding up SoC development by offering a framework of diverse cores and optimized fabric components, including essential RISC-V UnCore features like PLICs, Debug, and Trace components. This systemic flexibility ensures that each core hub aligns with specific customer requirements, providing a bespoke design experience that enhances adaptability and resource utilization. By integrating efficient communication protocols and optimized processing capabilities, InCore's Core-hub Generators foster seamless data exchange across modules. This is essential for developing next-gen semiconductor solutions that require both high performance and security. Whether used in embedded systems, high-performance industrial applications, or sophisticated consumer electronics, these generators stand as a testament to InCore's commitment to innovation and engineering excellence.
The RISC-V Hardware-Assisted Verification by Bluespec is designed to expedite the verification process for RISC-V cores. This platform supports both ISA and system-level testing, adding robust features such as verifying standard and custom ISA extensions along with accelerators. Moreover, it offers scalable access through the AWS cloud, making verification available anytime and anywhere. This tool aligns with the needs of modern developers, ensuring thorough testing within a flexible and accessible framework.
The Titanium Ti375 FPGA from Efinix boasts a high-density, low-power configuration, ideal for numerous advanced computing applications. Built on the well-regarded Quantum compute fabric, this FPGA integrates a robust set of features including a hardened RISC-V block, SerDes transceiver, and LPDDR4 DRAM controller, enhancing its versatility in challenging environments. The Ti375 model is designed with an intuitive I/O interface, allowing seamless communication and data handling. Its innovative architecture ensures minimal power consumption without compromising on processing speed, making it highly suitable for portable and edge devices. The inclusion of MIPI D-PHY further expands its applications in image processing and high-speed data transmission tasks. This FPGA is aligned with current market demands, emphasizing efficiency and scalability. Its architecture allows for diverse design challenges, supporting applications that transcend traditional boundaries. Efinix’s commitment to delivering sophisticated yet energy-efficient solutions is embodied in the Titanium Ti375, enabling new possibilities in the realm of computing.
The DisplayPort Transmitter from Trilinear Technologies is designed to deliver high-performance video and audio transmission for an array of display applications. It adheres to the latest standards to guarantee seamless integration with contemporary devices. Optimized for efficiency, it provides not only superior video quality but also minimizes latency to ensure a smooth user experience. Engineered with flexibility in mind, the DisplayPort Transmitter supports various resolutions and refresh rates, making it suitable for a wide range of multimedia interfaces. It is developed to handle complex signal processing while remaining energy-efficient, a critical feature for applications requiring prolonged usage without substantial power consumption. This transmitter's robust architecture ensures compatibility and reliability, standing as a testament to Trilinear's commitment to quality. It undergoes rigorous testing to meet industry standards, ensuring that each product can withstand varying operational conditions without compromising on performance or reliability, which is indispensable in today's dynamic tech environment.
Inicore’s iniDSP is a dynamic 16-bit digital signal processor core engineered for high-performance signal processing applications across a spectrum of fields including audio, telecommunications, and industrial automation. It leverages efficient computation capabilities to manage complex algorithms and data-intensive tasks, making it an ideal choice for a wide range of DSP needs. The iniDSP is designed around a scalable architecture, permitting customization to fit specific processing requirements. It ensures optimal performance whether interpreting audio signals, processing image data, or implementing control algorithms. The flexibility of this DSP core is evident in its seamless transition from simulation environments to real-world applications, supporting rapid prototyping and effective deployment. Inicore’s dedication to delivering robust processing solutions is epitomized in the iniDSP's ability to manage extensive DSP tasks with precision and speed. This makes it a valuable component for developers looking to amplify signal processing capabilities and achieve higher efficiency in their system designs.
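The canonical workload for a 16-bit DSP core of this kind is a fixed-point FIR filter. The sketch below models the Q15 multiply-accumulate loop such a core would execute in hardware; the helper names and coefficients are hypothetical, not part of any Inicore API:

```python
# Hypothetical sketch of a 16-bit DSP workload: a Q15 fixed-point FIR
# filter. Nothing here is Inicore code; it models the MAC loop a 16-bit
# DSP core executes natively.
def q15(x: float) -> int:
    # Convert a float in [-1, 1) to a 16-bit Q15 fixed-point integer.
    return max(-32768, min(32767, int(round(x * 32768))))

def fir_q15(samples, coeffs):
    """Direct-form FIR: multiply-accumulate into a wide accumulator."""
    out, hist = [], [0] * len(coeffs)
    for s in samples:
        hist = [s] + hist[:-1]                          # shift delay line
        acc = sum(h * c for h, c in zip(hist, coeffs))  # 32-bit-style MAC
        out.append(acc >> 15)                           # rescale to Q15
    return out

coeffs = [q15(c) for c in (0.25, 0.5, 0.25)]  # simple low-pass example
print(fir_q15([q15(1.0)] * 4, coeffs))
```

The step response ramps toward full scale (32767 in Q15), showing how the accumulator-then-shift pattern keeps precision through the MAC chain.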
The Matchstiq™ X40 by Epiq Solutions is a compact, high-performance software-defined radio (SDR) system designed to harness the power of AI and machine learning at the RF edge. Its small form factor makes it suitable for payloads with size, weight, and power constraints. The unit offers RF coverage up to 18GHz with an instantaneous bandwidth up to 450MHz, making it an excellent choice for demanding environments requiring advanced signal processing and direction finding. One of the standout features of the Matchstiq™ X40 is its integration of Nvidia's Orin NX for CPU/GPU operations and an AMD Zynq Ultrascale+ FPGA, allowing for sophisticated data processing capabilities directly at the point of RF capture. This combination offers enhanced performance for real-time signal analysis and machine learning implementations, making it suited for a variety of high-tech applications. The device supports a variety of input/output configurations, including 1 GbE, USB 3.0, and GPSDO, ensuring compatibility with numerous host systems. It offers dual configurations that support up to four receivers and two transmitters, along with options for phase-coherent multi-channel operations, thereby broadening its usability across different mission-critical tasks.
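As a rough sizing note, a 450 MHz instantaneous bandwidth implies a substantial raw data rate at the digitizer, which is why on-board FPGA/GPU processing at the point of capture matters. The sketch assumes complex (I/Q) sampling at the Nyquist rate with 16 bits per component; these are illustrative figures, not Matchstiq specifications:

```python
# Illustrative sizing: raw I/Q data rate implied by an instantaneous
# bandwidth, assuming complex Nyquist-rate sampling and 16-bit samples.
# These assumptions are for illustration, not Epiq Solutions specs.
def iq_data_rate_gbytes(bw_mhz: float, bits_per_component: int = 16) -> float:
    samples_per_s = bw_mhz * 1e6              # complex rate ~ bandwidth
    bytes_per_sample = 2 * bits_per_component / 8  # I and Q components
    return samples_per_s * bytes_per_sample / 1e9

# 450 MHz bandwidth -> ~1.8 GB/s of raw I/Q data
print(round(iq_data_rate_gbytes(450), 1))
```

At rates like this, streaming raw samples off-board over 1 GbE or USB 3.0 is impractical, motivating in-situ FPGA and GPU processing.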
The RWM6050 baseband modem by Blu Wireless represents a highly efficient advancement in mmWave technology, offering an economical and energy-saving option for high bandwidth and capacity applications. Developed alongside Renesas, the modem is configured to work with mmWave RF chipsets to deliver scalable multi-gigabit throughput across access and backhaul networks. This modem is ideal for applications requiring substantial data transfer across several hundred meters.

The RWM6050 leverages flexible channelization and advanced modulation support to enhance data rates with dual modems and integrated mixed-signal front-end processing. This ensures that the modem can effectively handle diverse use cases with varying bandwidth demands. Its versatile subsystems, including PHY, MAC, ADC/DAC, and beamforming, facilitate adaptive solutions for complex networking environments.

A standout feature of the RWM6050 is its integrated network synchronization, ensuring high precision in data delivery. Designed to meet the futuristic needs of communication networks, it helps end-users achieve superior performance through its programmable real-time scheduler and digital front-end processing. Additionally, the modem's highly digital design supports robust, secure connections needed for next-generation connectivity solutions.
Ventana's System IP product suite is crucial for integrating the Veyron CPUs into a cohesive RISC-V based high-performance system. This integration set ensures the smooth operation and optimization of Ventana's processors, enhancing their applicability across varied computational tasks. Particularly relevant for data centers and enterprise settings, the suite includes essential components such as IOMMU and CPX interfaces to streamline management of multiple workloads. These system IP products are built with a focus on optimized communication and processing efficiency, making them integral to achieving superior data throughput and system reliability. The design encompasses the necessities for robust virtualization and resource allocation, making it well suited to high-demand data environments that require meticulous coordination between system components. By leveraging Ventana's System IP, users can ensure that their processors meet and exceed the performance needs typical of today's cloud-intensive and server-heavy operations. This makes the System IP a foundational element in creating a performance-optimized technology stack capable of sustaining diverse, modern technological demands.
The Ultra-Low-Power 64-Bit RISC-V Core by Micro Magic represents a significant advancement in energy-efficient computing. This core, operating at an astonishingly low 10mW while running at 1GHz, sets a new standard for low-power design in processors. Micro Magic's proprietary methods ensure that this core maintains high performance even at reduced voltages, making it a perfect fit for applications where power conservation is crucial. Micro Magic's RISC-V core is designed to deliver substantial computational power without the typical energy costs associated with traditional architectures. With capabilities that make it suitable for a wide array of high-demand tasks, this core leverages sophisticated design approaches to achieve unprecedented power efficiency. The core's impressive performance metrics are complemented by Micro Magic's specialized tools, which aid in integrating the core into larger systems. Whether for embedded applications or more demanding computational roles, the Ultra-Low-Power 64-Bit RISC-V Core offers a compelling combination of power and performance. The design's flexibility and power efficiency make it a standout among other processors, reaffirming Micro Magic's position as a leader in semiconductor innovation. This solution is poised to influence how future processors balance speed and energy usage significantly.
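The figures quoted above translate directly into energy per clock cycle, which is the usual way to compare core efficiency across designs; the arithmetic below simply restates the 10 mW at 1 GHz claim:

```python
# Energy-per-cycle arithmetic from the quoted figures: milliwatts
# divided by gigahertz gives picojoules per clock cycle.
def energy_per_cycle_pj(power_mw: float, freq_ghz: float) -> float:
    # (power_mw * 1e-3 W) / (freq_ghz * 1e9 Hz) = power_mw / freq_ghz pJ
    return power_mw / freq_ghz

print(energy_per_cycle_pj(10, 1.0))  # -> 10.0 pJ per cycle at 10 mW, 1 GHz
```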
The Veyron V1 CPU represents an efficient, high-performance processor tailored to address a myriad of data center demands. As an advanced RISC-V architecture processor, it stands out by offering competitive performance compatible with the most current data center workloads. Designed to excel in efficiency, it marries performance with a sustainable energy profile, allowing for optimal deployment in various demanding environments. This processor brings flexibility to developers and data center operators by providing extensive customization options. Veyron V1's robust architecture is meant to enhance throughput and streamline operations, facilitating superior service provision across cloud infrastructures. Its compatibility with diverse integration requirements makes it ideal for a broad swath of industrial uses, encouraging scalability and robust data throughput. Adaptability is a key feature of Veyron V1 CPU, making it a preferred choice for enterprises looking to leverage RISC-V's open standards and extend the performance of their platforms. It aligns seamlessly with Ventana's broader ecosystem of products, creating excellence in workload delivery and resource management within hyperscale and enterprise environments.
The Dynamic Neural Accelerator II by EdgeCortix is a pioneering neural network core that combines flexibility and efficiency to support a broad array of edge AI applications. Engineered with run-time reconfigurable interconnects, it facilitates exceptional parallelism and efficient data handling. The architecture supports both convolutional and transformer neural networks, offering optimal performance across varied AI use cases. This architecture vastly improves upon traditional IP cores by dynamically reconfiguring data paths, which significantly enhances parallel task execution and reduces memory bandwidth usage. By adopting this approach, the DNA-II boosts its processing capability while minimizing energy consumption, making it highly effective for edge AI applications that require high output with minimal power input. Furthermore, the DNA-II's adaptability enables it to tackle inefficiencies often seen in batching tasks across other IP ecosystems. The architecture ensures that high utilization and low power consumption are maintained across operations, profoundly impacting sectors relying on edge AI for real-time data processing and decision-making.