
Platform Level IP: Comprehensive Semiconductor Solutions

Platform Level IP is a critical category within the semiconductor IP ecosystem, offering a wide array of solutions that are fundamental to the design and efficiency of semiconductor devices. This category includes various IP blocks and cores tailored for enhancing system-level performance, whether in consumer electronics, automotive systems, or networking applications. Suitable for both embedded control and advanced data processing tasks, Platform Level IP encompasses versatile components necessary for building sophisticated, multicore systems and other complex designs.

Subcategories within Platform Level IP cover a broad spectrum of integration needs:

1. **Multiprocessor/DSP (Digital Signal Processing)**: This includes specialized semiconductor IPs for handling tasks that require multiple processor cores working in tandem. These IPs are essential for applications needing high parallelism and performance, such as media processing, telecommunications, and high-performance computing.

2. **Processor Core Dependent**: These semiconductor IPs are designed to be tightly coupled with specific processor cores, ensuring optimal compatibility and performance. They include enhancements that provide seamless integration with one or more predetermined processor architectures, often used in specific applications like embedded systems or custom computing solutions.

3. **Processor Core Independent**: Unlike core-dependent IPs, these are flexible solutions that can integrate with a wide range of processor cores. This adaptability makes them ideal for designers looking to future-proof their technological investments or working with diverse processing environments.

Overall, Platform Level IP offers a robust foundation for developing flexible, efficient, and scalable semiconductor devices, catering to a variety of industries and technological requirements. Whether enhancing existing architectures or pioneering new designs, semiconductor IPs in this category play a pivotal role in the innovation and evolution of electronic devices.

Akida 2nd Generation

The 2nd Generation Akida processor introduces groundbreaking enhancements to BrainChip's neuromorphic processing platform, particularly ideal for intricate network models. It integrates eight-bit weight and activation support, improving energy efficiency and computational performance without enlarging model size. By supporting an extensive application set, Akida 2nd Generation addresses diverse Edge AI needs untethered from cloud dependencies. Notably, Akida 2nd Generation incorporates Temporal Event-Based Neural Nets (TENNs) and Vision Transformers, facilitating robust tracking through high-speed vision and audio processing. Its built-in support for on-chip learning further optimizes AI efficiency by reducing reliance on cloud training. This versatile processor fits perfectly for spatio-temporal applications across industrial, automotive, and healthcare sectors. Developers gain from its Configurable IP Platform, which allows seamless scalability across multiple use cases. The Akida ecosystem, including MetaTF, offers developers a strong foundation for integrating cutting-edge AI capabilities into Edge systems, ensuring secure and private data processing.

Vendor: BrainChip
Foundries: TSMC
Process nodes: 20nm
Categories: AI Processor, Digital Video Broadcast, IoT Processor, Multiprocessor / DSP, Security Protocol Accelerators, Vision Processor

NMP-750

The NMP-750 is designed as a cutting-edge performance accelerator for edge computing, tailored to address challenges in sectors like automotive, telecommunications, and smart factories. This product offers ample support for mobility, autonomous control, and process automation, setting a benchmark in high-performance computing for varied applications. With a processing power of up to 16 TOPS and 16 MB of local memory, it supports RISC-V/Arm Cortex-R or A 32-bit CPUs for substantial computational tasks. Its architecture supports a rich set of applications, including multi-camera stream processing and energy management, enabled through its AXI4 128-bit interfaces that manage extensive data traffic efficiently. This accelerator is particularly suited for complex scenarios such as spectral efficiency and smart building management, offering unparalleled performance capabilities. Designed for scalability and reliability, the NMP-750 reaches beyond traditional computing barriers, ensuring outstanding performance in real-time applications and next-gen technology deployments.
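As a rough illustration of what the stated 16 TOPS budget means in practice, the sketch below estimates inference throughput for a hypothetical vision workload; the per-frame MAC count and utilization figure are assumptions for the example, not AiM Future benchmarks.

```python
# Back-of-the-envelope throughput estimate for a 16 TOPS accelerator.
# The per-frame MAC count and utilization below are hypothetical, not
# published NMP-750 figures.

TOPS = 16                      # peak ops/s in tera-ops (1 MAC = 2 ops), per the product brief
macs_per_frame = 5e9           # assumed: a mid-sized vision network, ~5 GMAC/frame
utilization = 0.6              # assumed average utilization of the peak rate

ops_per_frame = 2 * macs_per_frame
frames_per_second = (TOPS * 1e12 * utilization) / ops_per_frame
print(f"~{frames_per_second:.0f} frames/s")   # ~960 frames/s under these assumptions
```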

Vendor: AiM Future
Categories: AI Processor, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent

NMP-350

The NMP-350 is specifically designed to serve as a cost-effective endpoint accelerator with a strong emphasis on low power consumption, making it ideal for various applications in AIoT, automotive, and smart appliances. This product is equipped with a robust architecture to facilitate myriad applications, such as driver authentication, digital mirrors, and predictive maintenance, while ensuring efficient resource management. Capable of delivering up to 1 TOPS, the NMP-350 integrates up to 1 MB of local memory, supporting RISC-V/Arm Cortex-M 32-bit CPU cores. It utilizes a triple AXI4 interface, each with a capacity of 128 bits, to manage host, CPU, and data traffic seamlessly. This architecture supports a host of applications in wearables, Industry 4.0, and health monitoring, adding significant value to futuristic technology solutions. Strategically targeting markets like AIoT/sensors and smart appliances, the NMP-350 positions itself as a favored choice for developing low-cost, power-sensitive device solutions. As industries gravitate toward energy-efficient technologies, products like NMP-350 offer a competitive edge in facilitating smart, green development processes.
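To make the 1 TOPS / 1 MB figures concrete, the following back-of-the-envelope check asks whether an int8-quantized model of an assumed size fits in the local memory; the parameter count and precision are illustrative assumptions, not AiM Future data.

```python
# Rough check of whether a model's weights fit in the NMP-350's 1 MB local memory.
# The parameter count and precision are illustrative assumptions.

local_memory_bytes = 1 * 1024 * 1024    # 1 MB on-chip, per the product brief
params = 900_000                        # assumed: a small keyword-spotting / anomaly model
bytes_per_param = 1                     # assumed: int8 quantized weights

weight_bytes = params * bytes_per_param
print(f"weights use {weight_bytes / local_memory_bytes:.0%} of local memory")
# Larger models would need to stream weights over the AXI4 interfaces instead.
```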

Vendor: AiM Future
Categories: AI Processor, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent

Metis AIPU PCIe AI Accelerator Card

Designed for high-performance applications, the Metis AIPU PCIe AI Accelerator Card employs four Metis AI Processing Units to deliver exceptional computational power. With its ability to reach up to 856 TOPS, this card is tailored for demanding vision applications, making it suitable for real-time processing of multi-channel video data. The PCIe form factor ensures easy integration into existing systems, while the customized software platform simplifies the deployment of neural networks for tasks like YOLO object detection. This accelerator card ensures scalability and efficiency, allowing developers to implement AI applications that are both powerful and cost-effective. The card’s architecture also takes advantage of RISC-V and Digital-In-Memory Computing technologies, bringing substantial improvements in speed and power efficiency.
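A quick way to read the headline numbers is to divide the card's 856 TOPS across its four AIPUs and across parallel video channels; the channel count and frame rate below are assumptions for illustration, not Axelera specifications.

```python
# Simple budgeting of the card's stated 856 TOPS across its four Metis AIPUs
# and across parallel video channels. Channel count and frame rate are
# illustrative assumptions.

card_tops = 856
aipus = 4
tops_per_aipu = card_tops / aipus           # 214 TOPS per AIPU

channels = 32                               # assumed multi-channel deployment
fps = 30                                    # assumed per-channel frame rate
ops_per_frame_per_channel = card_tops * 1e12 / channels / fps
print(f"{tops_per_aipu:.0f} TOPS/AIPU, ~{ops_per_frame_per_channel / 1e9:.0f} G-ops per frame per channel")
```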

Vendor: Axelera AI
Foundries: TSMC
Process nodes: 20nm
Categories: 2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Processor Core Dependent, Processor Core Independent, Vision Processor, WMV

WAVE6

WAVE6 represents the pinnacle of multi-standard video coding. It supports AV1 encoding, known for its efficient use of bandwidth and high compression quality. Its streamlined architecture uses a single clock domain that synchronizes the entropy and video codec engines on the fly. Efficiency is further enhanced by a power-conscious design that minimizes energy consumption through effective clock gating. WAVE6 serves sectors including data centers and surveillance systems, delivering performance of up to 8K60fps at 1 GHz. The integration of advanced coding techniques, together with the proprietary CFrame lossless compression, reduces the need for external memory.
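The quoted 8K60fps at 1 GHz operating point implies a throughput of roughly two pixels per clock cycle, as the short calculation below shows (assuming 8K means 7680x4320).

```python
# Throughput sanity check for the quoted 8K60fps at 1 GHz operating point.
# 8K UHD is assumed to mean 7680x4320.

width, height, fps = 7680, 4320, 60
clock_hz = 1e9

pixels_per_second = width * height * fps          # ~1.99e9 pixels/s
pixels_per_cycle = pixels_per_second / clock_hz
print(f"{pixels_per_cycle:.2f} pixels per clock cycle")   # ~1.99, i.e. about 2 pixels/cycle
```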

Vendor: Chips&Media
Categories: AV1, Cell / Packet, Graphics & Video Modules, H.264, H.265, H.266, MIPI, MPEG 4, Multiprocessor / DSP

Origin E1

The Origin E1 neural engines by Expedera redefine efficiency and customization for low-power AI solutions. Specially crafted for edge devices like home appliances and security cameras, these engines serve ultra-low power applications that demand continuous sensing capabilities. They minimize power consumption to as low as 10-20mW, keeping data secure and eliminating the need for external memory access. The advanced packet-based architecture enhances performance by facilitating parallel layer execution, thereby optimizing resource utilization. Designed to be a perfect fit for dedicated AI functions, Origin E1 is tailored to support specific neural networks efficiently while reducing silicon area and system costs. It supports various neural networks, from CNNs to RNNs, making it versatile for numerous applications. This engine is also one of the most power-efficient in the industry, boasting an impressive 18 TOPS per Watt. Origin E1 also offers a full TVM-based software stack for easy integration and performance optimization across customer platforms. It supports a wide array of data types and networks, ensuring flexibility and sustained power efficiency, averaging 80% utilization. This makes it a reliable choice for OEMs looking for high performance in always-sensing applications, offering a competitive edge in both power efficiency and security.
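Combining the quoted 18 TOPS/W efficiency with the 10-20 mW always-sensing envelope gives a feel for the compute available to always-on networks; the sketch below is simple arithmetic on those published figures.

```python
# Relating the quoted 18 TOPS/W efficiency to the 10-20 mW always-sensing
# power envelope. Straightforward arithmetic on the published figures.

tops_per_watt = 18
for power_mw in (10, 20):
    effective_tops = tops_per_watt * power_mw / 1000
    print(f"{power_mw} mW -> ~{effective_tops:.2f} TOPS "
          f"({effective_tops * 1e12:.2e} ops/s)")
# 10 mW -> ~0.18 TOPS, 20 mW -> ~0.36 TOPS: ample for small always-on networks.
```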

Vendor: Expedera
11 Categories

ORC3990 – DMSS LEO Satellite Endpoint System On Chip (SoC)

The ORC3990 SoC is a state-of-the-art solution designed for satellite IoT applications within Totum's DMSS™ network. This low-power sensor-to-satellite system integrates an RF transceiver, ARM CPUs, memories, and a power amplifier to offer seamless IoT connectivity via LEO satellite networks. It boasts an optimized link budget for effective indoor signal coverage, eliminating the need for additional GNSS components. This compact SoC supports industrial temperature ranges and is engineered for a 10+ year battery life using advanced power management.

Vendor: Orca Systems Inc.
Foundries: TSMC
Process nodes: 22nm
Categories: 3GPP-5G, Bluetooth, Processor Core Independent, RF Modules, USB, Wireless Processor

Veyron V2 CPU

The Veyron V2 CPU takes the innovation witnessed in its predecessor and propels it further, offering unparalleled performance for AI and data center-class applications. This successor to the V1 CPU integrates seamlessly into environments requiring high computational power and efficiency, making it perfect for modern data challenges. Built upon RISC-V's architecture, it provides an open-standard alternative to traditional closed processor models. With a heavy emphasis on AI and machine learning workloads, Veyron V2 is designed to excel in handling complex data-centric tasks. This CPU can quickly adapt to multifaceted requirements, proving indispensable from enterprise servers to hyperscale data centers. Its superior design enables it to outperform many contemporary alternatives, positioning it as a lead component for next-generation computing solutions. The processor's adaptability allows for rapid and smooth integration into existing systems, facilitating quick upgrades and enhancements tailored to specific operational needs. As the Veyron V2 CPU is highly energy-efficient, it empowers data centers to achieve greater sustainability benchmarks without sacrificing performance.

Vendor: Ventana Micro Systems
Foundries: Samsung, TSMC
Process nodes: 12nm, 22nm
Categories: AI Processor, CPU, Processor Core Dependent, Processor Cores

Metis AIPU M.2 Accelerator Module

The Metis AIPU M.2 Accelerator Module is a cutting-edge AI processing unit designed to boost the performance of edge computing tasks. This module integrates seamlessly with innovative applications, offering a robust solution for inference at the edge. It excels in vision AI tasks with its dedicated 512MB LPDDR4x memory, providing the necessary storage for complex tasks. Offering unmatched energy efficiency, the Metis AIPU M.2 module is capable of delivering significant performance gains while maintaining minimal power consumption. At an accessible price point, this module opens up AI processing capabilities for a variety of applications. As an essential component of next-generation vision processing systems, it is ideal for industries seeking to implement AI technologies swiftly and effectively.

Vendor: Axelera AI
Foundries: GlobalFoundries, Samsung, TSMC, UMC
Process nodes: 20nm, 40nm, 55nm, 90nm
Categories: 2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Processor Core Dependent, Vision Processor, WMV

Origin E8

The Origin E8 NPUs represent Expedera's cutting-edge solution for environments demanding the utmost in processing power and efficiency. This high-performance core scales its TOPS capacity between 32 and 128 with single-core configurations, addressing complex AI tasks in automotive and data-centric operational settings. The E8’s architecture stands apart due to its capability to handle multiple concurrent tasks without any compromise in performance. This unit adopts Expedera's signature packet-based architecture for optimized parallel execution and resource management, removing the necessity for hardware-specific tweaks. The Origin E8 also supports high input resolutions up to 8K and integrates well across standard and custom neural networks, enhancing its utility in future-forward AI applications. Leveraging a flexible, scalable design, the E8 IP cores make use of an exhaustive software suite to augment AI deployment. Field-proven and already deployed in a multitude of consumer vehicles, Expedera's Origin E8 provides a robust, reliable choice for developers needing optimized AI inference performance, ideally suited for data centers and high-power automobile systems.

Vendor: Expedera
Categories: AI Processor, AMBA AHB / APB/ AXI, Building Blocks, Coprocessor, CPU, GPU, Processor Core Dependent, Processor Core Independent, Receiver/Transmitter, Vision Processor

H.264 FPGA Encoder and CODEC Micro Footprint Cores

A2e Technologies offers a cutting-edge H.264 FPGA Encoder and CODEC that promises the industry's smallest and fastest solution with ultra-low latency. This core is ITAR-compliant and adaptable, capable of delivering a 1080p60 video stream while retaining a minimal footprint. The H.264 core is engineered to adapt to unique pixel depths and resolutions, leveraging a modular design that allows for seamless integration into a variety of FPGA environments. Supporting both I and P frames, this encoder ensures robust video compression with customizable configurations for various applications. The core's flexibility extends to its ability to handle multiple video streams with differing sizes or compression ratios simultaneously. Its fully synchronous design supports resolutions up to 4096 x 4096, illustrating its capacity to manage high-definition sources effectively. The flexibility in design permits its application across FPGAs from numerous manufacturers, including Xilinx and AMD, making it versatile for diverse project requirements. With enhancements like an improved AXI wrapper for better integration and significant reductions in RAM needs for raster-to-macroblock transformations, A2e's H.264 Encoder is equipped for high performance. It supports a variety of encoding styles with a processing rate of 1.5 clocks per pixel and includes comprehensive deliverables such as FPGA-specific netlists and testing environments to ensure a swift and straightforward deployment.
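The quoted 1.5 clocks per pixel translates directly into an FPGA clock requirement; the sketch below estimates it for 1080p60 using active pixels only (blanking overhead ignored), so the real figure would be somewhat higher.

```python
# Estimating the FPGA clock needed for 1080p60 at the quoted 1.5 clocks/pixel.
# Uses active pixels only; blanking overhead is ignored for simplicity.

width, height, fps = 1920, 1080, 60
clocks_per_pixel = 1.5

pixel_rate = width * height * fps                  # 124,416,000 pixels/s
required_clock_mhz = pixel_rate * clocks_per_pixel / 1e6
print(f"required clock ~= {required_clock_mhz:.0f} MHz")   # ~187 MHz
```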

Vendor: A2e Technologies
Categories: AMBA AHB / APB/ AXI, Arbiter, H.264, Multiprocessor / DSP, TICO, USB

SCR9 Processor Core

The SCR9 Processor Core is a cutting-edge processor designed for entry-level server-class and personal computing applications. Featuring a 12-stage dual-issue out-of-order pipeline, it supports robust RISC-V extensions including vector operations and a high-complexity memory system. This core is well-suited for high-performance computing, offering exceptional power efficiency with multicore coherence and the ability to integrate accelerators, making it suitable for areas like AI, ML, and enterprise computing.

Vendor: Syntacore
Foundries: All foundries
Process nodes: All process nodes
Categories: AI Processor, CPU, Microcontroller, Processor Core Dependent, Processor Cores

AX45MP

The AX45MP is engineered as a high-performance processor that supports multicore architecture and advanced data processing capabilities, particularly suitable for applications requiring extensive computational efficiency. Powered by the AndesCore processor line, it capitalizes on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.

Vendor: Andes Technology
Categories: 2D / 3D, ADPCM, CPU, IoT Processor, Processor Core Independent, Processor Cores, Vision Processor

AndeShape Platforms

The AndeShape Platforms are designed to streamline system development by providing a diverse suite of IP solutions for SoC architecture. These platforms encompass a variety of product categories, including the AE210P for microcontroller applications, AE300 and AE350 AXI fabric packages for scalable SoCs, and AE250 AHB platform IP. These solutions facilitate efficient system integration with Andes processors. Furthermore, AndeShape offers a sophisticated range of development platforms and debugging tools, such as ADP-XC7K160/410, which reinforce the system design and verification processes, providing a comprehensive environment for the innovative realization of IoT and other embedded applications.

Vendor: Andes Technology
Categories: Embedded Memories, Microcontroller, Processor Core Dependent, Processor Core Independent, Standard cell

High Performance RISC-V Processor

The High Performance RISC-V Processor from Cortus represents the forefront of high-end computing, designed for applications demanding exceptional processing speeds and throughput. It features an out-of-order execution core that supports both single-core and multi-core configurations for diverse computing environments. This processor specializes in handling complex tasks requiring multi-threading and cache coherency, making it suitable for applications ranging from desktops and laptops to high-end servers and supercomputers. It includes integrated vector and AI accelerators, enhancing its capability to manage intensive data-processing workloads efficiently. Furthermore, this RISC-V processor is adaptable for advanced embedded systems, including automotive central units and AI applications in ADAS, providing enormous potential for innovation and performance across various markets.

Vendor: Cortus SAS
Foundries: Intel Foundry, TSMC
Process nodes: 16nm, 22nm
Categories: AI Processor, Building Blocks, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Security Processor

Origin E2

Origin E2 NPUs focus on delivering power and area efficiency, making them ideal for on-device AI applications in smartphones and edge nodes. These processing units support a wide range of neural networks, including video, audio, and text-based applications, all while maintaining impressive performance metrics. The unique packet-based architecture ensures effective performance with minimal latency and eliminates the need for hardware-specific optimizations. The E2 series offers customization options allowing it to fit specific application needs perfectly, with configurations supporting up to 20 TOPS. This flexibility represents significant design advancements that help increase processing efficiency without introducing latency penalties. Expedera's power-efficient design results in NPUs with industry-leading performance at 18 TOPS per Watt. Further augmenting the value of E2 NPUs is their ability to run multiple neural network types efficiently, including LLMs, CNNs, RNNs, and others. The IP is field-proven, deployed in over 10 million consumer devices, reinforcing its reliability and effectiveness in real-world applications. This makes the Origin E2 an excellent choice for companies aiming to enhance AI capabilities while managing power and area constraints effectively.

Vendor: Expedera
Categories: AI Processor, AMBA AHB / APB/ AXI, Building Blocks, Coprocessor, CPU, GPU, Processor Core Dependent, Processor Core Independent, Receiver/Transmitter, Vision Processor

Automotive AI Inference SoC

Cortus's Automotive AI Inference SoC is a breakthrough solution tailored for autonomous driving and advanced driver assistance systems. This SoC combines efficient image processing with AI inference capabilities, optimized for city infrastructure and mid-range vehicle markets. Built on a RISC-V architecture, the AI Inference SoC is capable of running specialized algorithms, akin to those in the Yolo series, for fast and accurate image recognition. Its low power consumption makes it suitable for embedded automotive applications requiring enhanced processing without compromising energy efficiency. This chip demonstrates its adequacy for Level 2 and Level 4 autonomous driving systems, providing a comprehensive AI-driven platform that enhances safety and operational capabilities in urban settings.

Vendor: Cortus SAS
Foundries: Intel Foundry, TSMC
Process nodes: 16nm, 22nm FD-SOI
Categories: AI Processor, Audio Processor, DSP Core, Multiprocessor / DSP, Processor Core Dependent, Vision Processor, W-CDMA

Chimera GPNPU

The Chimera GPNPU by Quadric is a versatile processor specifically designed to enhance machine learning inference tasks on a broad range of devices. It provides a seamless blend of traditional digital signal processing (DSP) and neural processing unit (NPU) capabilities, which allow it to handle complex ML networks alongside conventional C++ code. Designed with a focus on adaptability, the Chimera GPNPU architecture enables easy porting of various models and software application programming, making it a robust solution for rapidly evolving AI technologies. A key feature of the Chimera GPNPU is its scalable design, which extends from 1 to a remarkable 864 TOPs, catering to applications from standard to advanced high-performance requirements. This scalability is coupled with its ability to support a broad range of ML networks, such as classic backbones, vision transformers, and large language models, fulfilling various computational needs across industries. The Chimera GPNPU also excels in automotive applications, including ADAS and ECU systems, due to its ASIL-ready design. The processor's hybrid architecture merges Von Neumann and 2D SIMD matrix capabilities, promoting efficient execution of scalar, vector, and matrix operations. It boasts a deterministic execution pipeline and extensive customization options, including configurable instruction caches and local register memories that optimize memory usage and power efficiency. This design effectively reduces off-chip memory accesses, ensuring high performance while minimizing power consumption.

Vendor: Quadric
Foundries: TSMC, UMC
Process nodes: 22nm, 28nm, 55nm
Categories: AI Processor, AMBA AHB / APB/ AXI, CPU, DSP Core, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, VGA, Vision Processor

NMP-550

The NMP-550 stands out as a performance-focused accelerator catered towards applications necessitating high efficiency, especially in demanding fields such as automotive, drones, and AR/VR. This technology caters to various application needs, including driver monitoring, image/video analytics, and heightened security measures through its powerful architecture and processing capability. Boasting a significant computation potential of up to 6 TOPS, the NMP-550 includes up to 6 MB of local memory. Featuring RISC-V/Arm Cortex-M or A 32-bit CPUs, the product ensures robust processing for advanced applications. The triple AXI4 interface provides a seamless 128-bit data exchange across hosts, CPUs, and data channels, magnifying flexibility for technology integrators. Ideal for medical devices, this product also expands its utility into security and surveillance, supporting crucial processes like super-resolution and fleet management. Its comprehensive design and efficiency make it an optimal choice for applications demanding elevated performance within constrained resources.

Vendor: AiM Future
Categories: AI Processor, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent

xcore.ai

The xcore.ai platform by XMOS Semiconductor is a sophisticated and cost-effective solution aimed specifically at intelligent IoT applications. Harnessing a unique multi-threaded micro-architecture, xcore.ai provides superior low latency and highly predictable performance, tailored for diverse industrial needs. It is equipped with 16 logical cores divided across two multi-threaded processing tiles. These tiles come enhanced with 512 kB of SRAM and a vector unit supporting both integer and floating-point operations, allowing it to process both simple and complex computational demands efficiently. A key feature of the xcore.ai platform is its powerful interprocessor communication infrastructure, which enables seamless high-speed communication between processors, facilitating ultimate scalability across multiple systems on a chip. Within this homogeneous environment, developers can comfortably integrate DSP, AI/ML, control, and I/O functionalities, allowing the device to adapt to specific application requirements efficiently. Moreover, the software-defined architecture allows optimal configuration, reducing power consumption and achieving cost-effective intelligent solutions. The xcore.ai platform shows impressive DSP capabilities, thanks to its scalar pipeline that achieves up to 32-bit floating-point operations and peak performance rates of up to 1600 MFLOPS. AI/ML capabilities are also robust, with support for various bit vector operations, making the platform a strong contender for AI applications requiring homogeneous computing environments and exceptional operator integration.
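To illustrate the kind of deterministic budgeting the multi-threaded architecture enables, the sketch below divides an assumed tile clock across the 16 logical cores for a 48 kHz audio pipeline; the clock frequency and sample rate are assumptions made for the example, not XMOS specifications.

```python
# Illustrative cycle budgeting for a multi-threaded audio pipeline on a
# 16-logical-core device. The tile clock and sample rate below are assumptions
# for the sake of the example, not XMOS specifications.

tile_clock_hz = 600e6          # assumed per-tile clock for this sketch
threads = 16                   # logical cores stated for xcore.ai
sample_rate = 48_000           # assumed audio sample rate

cycles_per_sample_per_thread = tile_clock_hz / threads / sample_rate
print(f"~{cycles_per_sample_per_thread:.0f} cycles per sample per thread")
# A fixed, predictable per-sample budget is exactly what the architecture's
# deterministic multi-threading is designed to provide.
```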

Vendor: XMOS Semiconductor
Foundries: All foundries
Process nodes: All process nodes
Categories: ADPCM, AI Processor, Audio Processor, Building Blocks, Coprocessor, CPU, DSP Core, IoT Processor, Processor Core Dependent, Processor Cores, Receiver/Transmitter, Vision Processor

Origin E6

The Origin E6 neural engines are built to push the boundaries of what's possible in edge AI applications. Supporting the latest in AI model innovations, such as generative AI and various traditional networks, the E6 scales from 16 to 32 TOPS, aimed at balancing performance, efficiency, and flexibility. This versatility is essential for high-demand applications in next-generation devices like smartphones, digital reality setups, and consumer electronics. Expedera’s E6 employs packet-based architecture, facilitating parallel execution that leads to optimal resource usage and eliminating the need for dedicated hardware optimizations. A standout feature of this IP is its ability to maintain up to 90% processor utilization even in complex multi-network environments, thus proving its robustness and adaptability. Crafted to fit various use cases precisely, E6 offers a comprehensive TVM-based software stack and is well-suited for tasks that require simultaneous running of numerous neural networks. This has been proven through its deployment in over 10 million consumer units. Its design effectively manages power and system resources, thus minimizing latency and maximizing throughput in demanding scenarios.

Vendor: Expedera
Categories: AI Processor, AMBA AHB / APB/ AXI, Building Blocks, Coprocessor, CPU, GPU, Processor Core Independent, Receiver/Transmitter, Vision Processor

WAVE5

Building on its predecessor, the WAVE5 series offers robust multi-standard video encoding capabilities with an established reputation in the media and entertainment sectors. WAVE5 is versatile, supporting formats such as HEVC and AVC, and delivers outstanding performance, with outputs of up to 4K240fps at 1 GHz. It has been fine-tuned to handle complex multi-instance operations by efficiently managing data transfer and conversion tasks. Its ability to maintain high visual fidelity while keeping integration costs low makes it a strategic choice for application fields such as automotive and mobile entertainment. Secondary AXI ports and a fully integrated rotation and scaling mechanism add to its versatility.

Vendor: Chips&Media
Categories: Cell / Packet, Graphics & Video Modules, H.264, H.265, MIPI, MPEG 4, Multiprocessor / DSP

A25

The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.

Vendor: Andes Technology
Categories: CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores, Standard cell

AndesCore Processors

AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores uniformly support the RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.

Vendor: Andes Technology
Categories: CPU, FlexRay, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor

RISC-V Core-hub Generators

InCore's RISC-V Core-hub Generators are a revolutionary tool designed to give developers unparalleled control over their SoC designs. These generators enable the customization of core-hub configurations down to the ISA and microarchitecture level, promoting a tailored approach to chip creation. Built around the robustness of the RISC-V architecture, the Core-hub Generators support versatile application needs, allowing designers to innovate without boundaries. The Core-hub concept is pivotal to speeding up SoC development by offering a framework of diverse cores and optimized fabric components, including essential RISC-V UnCore features like PLICs, Debug, and Trace components. This systemic flexibility ensures that each core hub aligns with specific customer requirements, providing a bespoke design experience that enhances adaptability and resource utilization. By integrating efficient communication protocols and optimized processing capabilities, InCore's Core-hub Generators foster seamless data exchange across modules. This is essential for developing next-gen semiconductor solutions that require both high performance and security. Whether used in embedded systems, high-performance industrial applications, or sophisticated consumer electronics, these generators stand as a testament to InCore's commitment to innovation and engineering excellence.

Vendor: InCore Semiconductors
Categories: CPU, Multiprocessor / DSP, Processor Core Independent, Processor Cores

GSHARK

GSHARK is a high-performance GPU IP designed to accelerate graphics on embedded devices. Known for its extreme power efficiency and seamless integration, this GPU IP significantly reduces CPU load, making it ideal for use in devices like digital cameras and automotive systems. Its remarkable track record of over one hundred million shipments underscores its reliability and performance. Engineered with TAKUMI's proprietary architecture, GSHARK integrates advanced rendering capabilities. This architecture supports real-time, on-the-fly graphics processing similar to that found in PCs, smartphones, and gaming consoles, ensuring a rich user experience and efficient graphics applications. This IP excels in environments where power consumption and performance balance are crucial. GSHARK is at the forefront of embedded graphics solutions, providing significant improvements in processing speed while maintaining low energy usage. Its architecture easily handles demanding graphics rendering tasks, adding considerable value to any embedded system it is integrated into.

Vendor: TAKUMI Corporation
Categories: GPU, Processor Core Independent

Titanium Ti375 - High-Density, Low-Power FPGA

The Titanium Ti375 FPGA from Efinix boasts a high-density, low-power configuration, ideal for numerous advanced computing applications. Built on the well-regarded Quantum compute fabric, this FPGA integrates a robust set of features including a hardened RISC-V block, SerDes transceiver, and LPDDR4 DRAM controller, enhancing its versatility in challenging environments. The Ti375 model is designed with an intuitive I/O interface, allowing seamless communication and data handling. Its innovative architecture ensures minimal power consumption without compromising on processing speed, making it highly suitable for portable and edge devices. The inclusion of MIPI D-PHY further expands its applications in image processing and high-speed data transmission tasks. This FPGA is aligned with current market demands, emphasizing efficiency and scalability. Its architecture allows for diverse design challenges, supporting applications that transcend traditional boundaries. Efinix’s commitment to delivering sophisticated yet energy-efficient solutions is embodied in the Titanium Ti375, enabling new possibilities in the realm of computing.

Vendor: Efinix, Inc.
18 Categories

iniDSP

Inicore’s iniDSP is a dynamic 16-bit digital signal processor core engineered for high-performance signal processing applications across a spectrum of fields including audio, telecommunications, and industrial automation. It leverages efficient computation capabilities to manage complex algorithms and data-intensive tasks, making it an ideal choice for all DSP needs. The iniDSP is designed around a scalable architecture, permitting customization to fit specific processing requirements. It ensures optimal performance whether interpreting audio signals, processing image data, or implementing control algorithms. The flexibility of this DSP core is evident in its seamless transition from simulation environments to real-world applications, supporting rapid prototyping and effective deployment. Inicore’s dedication to delivering robust processing solutions is epitomized in the iniDSP's ability to manage extensive DSP tasks with precision and speed. This makes it a valuable component for developers looking to amplify signal processing capabilities and achieve higher efficiency in their system designs.

Vendor: Inicore Inc.
Categories: Audio Processor, DSP Core, Processor Core Dependent

System IP

Ventana's System IP product suite is crucial for integrating the Veyron CPUs into a cohesive RISC-V based high-performance system. This integration set ensures the smooth operation and optimization of Ventana's processors, enhancing their applicability across various computational tasks. Particularly relevant for data centers and enterprise settings, this suite includes essential components such as IOMMU and CPX interfaces to streamline multiple workloads management efficiently. These systems IP products are built with a focus on optimized communication and processing efficiency, making them integral in achieving superior data throughput and system reliability. The design encompasses the necessities for robust virtualization and resource allocation, making it ideally suited for high-demand data environments requiring meticulous coordination between system components. By leveraging Ventana's System IP, users can ensure that their processors meet and exceed the performance needs typical in today's cloud-intensive and server-heavy operations. This capability makes the System IP a foundational element in creating a performance-optimized technology stack capable of sustaining diverse, modern technological demands.

Vendor: Ventana Micro Systems
Foundries: GlobalFoundries, Samsung
Process nodes: 20nm, 22nm FD-SOI
Categories: AMBA AHB / APB/ AXI, DSP Core, Input/Output Controller, Processor Core Dependent, USB

Ultra-Low-Power 64-Bit RISC-V Core

The Ultra-Low-Power 64-Bit RISC-V Core by Micro Magic represents a significant advancement in energy-efficient computing. This core, operating at an astonishingly low 10mW while running at 1GHz, sets a new standard for low-power design in processors. Micro Magic's proprietary methods ensure that this core maintains high performance even at reduced voltages, making it a perfect fit for applications where power conservation is crucial. Micro Magic's RISC-V core is designed to deliver substantial computational power without the typical energy costs associated with traditional architectures. With capabilities that make it suitable for a wide array of high-demand tasks, this core leverages sophisticated design approaches to achieve unprecedented power efficiency. The core's impressive performance metrics are complemented by Micro Magic's specialized tools, which aid in integrating the core into larger systems. Whether for embedded applications or more demanding computational roles, the Ultra-Low-Power 64-Bit RISC-V Core offers a compelling combination of power and performance. The design's flexibility and power efficiency make it a standout among other processors, reaffirming Micro Magic's position as a leader in semiconductor innovation. This solution is poised to influence how future processors balance speed and energy usage significantly.
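The quoted 10 mW at 1 GHz corresponds to about 10 pJ per clock cycle, as the short conversion below shows.

```python
# Converting the quoted 10 mW at 1 GHz operating point into energy per cycle.

power_w = 10e-3
frequency_hz = 1e9

energy_per_cycle_j = power_w / frequency_hz
print(f"{energy_per_cycle_j * 1e12:.0f} pJ per clock cycle")   # 10 pJ/cycle
```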

Vendor: Micro Magic, Inc.
Foundries: TSMC
Process nodes: 28nm
Categories: AI Processor, CPU, Multiprocessor / DSP, Processor Core Independent, Processor Cores

Veyron V1 CPU

The Veyron V1 CPU represents an efficient, high-performance processor tailored to address a myriad of data center demands. As an advanced RISC-V architecture processor, it stands out by offering competitive performance compatible with the most current data center workloads. Designed to excel in efficiency, it marries performance with a sustainable energy profile, allowing for optimal deployment in various demanding environments. This processor brings flexibility to developers and data center operators by providing extensive customization options. Veyron V1's robust architecture is meant to enhance throughput and streamline operations, facilitating superior service provision across cloud infrastructures. Its compatibility with diverse integration requirements makes it ideal for a broad swath of industrial uses, encouraging scalability and robust data throughput. Adaptability is a key feature of Veyron V1 CPU, making it a preferred choice for enterprises looking to leverage RISC-V's open standards and extend the performance of their platforms. It aligns seamlessly with Ventana's broader ecosystem of products, creating excellence in workload delivery and resource management within hyperscale and enterprise environments.

Vendor: Ventana Micro Systems
Foundries: Samsung, TSMC
Process nodes: 28nm, 45nm
Categories: AI Processor, Coprocessor, CPU, Processor Core Dependent, Processor Cores

Dynamic Neural Accelerator II Architecture

The Dynamic Neural Accelerator II by EdgeCortix is a pioneering neural network core that combines flexibility and efficiency to support a broad array of edge AI applications. Engineered with run-time reconfigurable interconnects, it facilitates exceptional parallelism and efficient data handling. The architecture supports both convolutional and transformer neural networks, offering optimal performance across varied AI use cases. This architecture vastly improves upon traditional IP cores by dynamically reconfiguring data paths, which significantly enhances parallel task execution and reduces memory bandwidth usage. By adopting this approach, the DNA-II boosts its processing capability while minimizing energy consumption, making it highly effective for edge AI applications that require high output with minimal power input. Furthermore, the DNA-II's adaptability enables it to tackle inefficiencies often seen in batching tasks across other IP ecosystems. The architecture ensures that high utilization and low power consumption are maintained across operations, profoundly impacting sectors relying on edge AI for real-time data processing and decision-making.

Vendor: EdgeCortix Inc.
Foundries: TSMC
Process nodes: 16nm
Categories: AI Processor, Audio Processor, CPU, Cryptography Cores, Multiprocessor / DSP, Processor Core Independent, Processor Cores, Vision Processor

Trion FPGAs - Edge and IoT Solution

The Trion FPGA family by Efinix addresses the dynamic needs of edge computing and IoT applications. These devices range from 4K to 120K logic elements, balancing computational capability with efficient power usage for a wide range of general-purpose applications. Trion FPGAs are designed to empower edge devices with rapid processing capabilities and flexible interfacing. They support a diverse array of use cases, from industrial automation systems to consumer electronics requiring enhanced connectivity and real-time data processing. Offering a pragmatic solution for designers, Trion FPGAs integrate seamlessly into existing systems, facilitating swift development and deployment. They provide the adaptability needed to meet the intricate demands of modern technological environments, enabling innovative edge and IoT solutions to flourish.

Vendor: Efinix, Inc.
18 Categories

Arria 10 System on Module

Dream Chip Technologies' Arria 10 System on Module (SoM) emphasizes embedded and automotive vision applications. Utilizing Altera's Arria 10 SoC Devices, the SoM is compact yet packed with powerful capabilities. It features a dual-core Cortex A9 CPU and supports up to 480 KLEs of FPGA logic elements, providing ample space for customization and processing tasks. The module integrates robust power management features to ensure efficient energy usage, with interfaces for DDR4 memory, PCIe Gen3, Ethernet, and 12G SDI among others, housed in a form factor measuring just 8 cm by 6.5 cm. Engineered to support high-speed data processing, the Arria 10 SoM includes dual DDR4 memory interfaces and 12 transceivers at 12 Gbit/s and above. It provides comprehensive connectivity options, including two USB ports, Gigabit Ethernet, and multiple GPIOs with level-shifting capabilities. This level of integration makes it optimal for developing solutions for automotive systems, particularly in scenarios requiring high-speed data and image processing. Additionally, the SoM comes with a suite of reference designs, such as the Intel Arria 10 Golden System Reference Design, to expedite development cycles. This includes pre-configured HPS and memory controller IP, as well as customized U-Boot and Angström Linux distributions, further enriching its utility in automotive and embedded domains.

Vendor: Dream Chip Technologies GmbH
Categories: AMBA AHB / APB/ AXI, CPU, Ethernet, GPU, MIPI, PCI, Processor Core Dependent, Processor Core Independent, SATA, V-by-One

RF/Analog IP

Certus Semiconductor specializes in advanced RF/Analog IP solutions, tackling the intricate needs of high-performance wireless communication systems. Their cutting-edge technology provides ultra-low power wireless front-end integration, verified across a range of silicon contexts to ensure reliability and excellence. These solutions cover a comprehensive spectrum of RF configurations from silicon-proven RF IPs to fully integrated RF transceivers used in state-of-the-art wireless devices. Features of Certus's RF/Analog solutions include finely tuned custom PLLs and LNAs with frequencies reaching up to 6GHz, tailored for superior phase noise performance and minimal jitter. This level of precision ensures optimized signal integrity and power efficiency, crucial for maintaining peak operations in wireless systems like LTE, WiFi, and GNSS. Furthermore, the innovative next-generation wireless IPs cater to ultra-low latency operations necessary for modern communication protocols, demonstrating Certus Semiconductor's commitment to driving forward-thinking technology in RF design. With an inclusive approach covering custom designs and off-the-shelf IP offerings, Certus ensures that each product meets specific project demands with exceptional precision and efficiency.

Vendor: Certus Semiconductor
Foundries: GlobalFoundries
Process nodes: 10nm, 12nm, 22nm
Categories: 3GPP-5G, Analog Front Ends, Fibre Channel, PLL, Processor Core Dependent, RF Modules, USB

Software-Defined High PHY

The Software-Defined High PHY from AccelerComm is an adaptable solution designed for ARM processor architectures, providing versatile platform support. This PHY can function with or without hardware acceleration, catering to varying demands in capacity and power for diverse applications. It seamlessly integrates into O-RAN environments and can be tailored to optimize performance, aligning with specific network requirements. This versatile solution capitalizes on the strengths of ARM processors and offers additional flexibility through optional hardware accelerations. By providing a scalable framework, the Software-Defined High PHY supports efficient deployment, regardless of the network size or complexity. This adaptability ensures that network managers can configure the PHY to meet specific performance objectives, maximizing throughput while minimizing power consumption. Esteemed for its flexibility and ease of integration, this solution exemplifies AccelerComm's commitment to delivering high-caliber, platform-independent PHY solutions. It empowers network developers to tailor their setups, thus enhancing performance metrics like latency and spectral efficiency.

Vendor: AccelerComm Limited
Categories: 3GPP-5G, 3GPP-LTE, AMBA AHB / APB/ AXI, Multiprocessor / DSP, Processor Core Independent

Tyr Superchip

VSORA's Tyr Superchip epitomizes high-performance capabilities tailored for the demanding worlds of autonomous driving and generative AI. With its advanced multi-core architecture, this superchip can execute any algorithm efficiently without relying on CUDA, which promotes versatility in AI deployment. Built to deliver a seamless combination of AI and general-purpose processing, the Tyr Superchip utilizes sparsity techniques, supporting quantization on-the-fly, which optimizes its performance for a wide array of computational tasks. The Tyr Superchip is distinctive for its ability to support the simultaneous execution of AI and DSP tasks, selectable on a layer-by-layer basis, which provides unparalleled flexibility in workload management. This flexibility is further complemented by its low latency and power-efficient design, boasting performance near theoretical maximums, with support for next-generation algorithms and software-defined vehicles (SDVs). Safety is prioritized with the implementation of ISO26262/ASIL-D features, making the Tyr Superchip an ideal solution for the automotive industry. Its hardware is designed to handle the computational load required for safe and efficient autonomous driving, and its programmability allows for ongoing adaptations to new automotive standards and innovations.

Vendor: VSORA
Foundries: GlobalFoundries, Samsung, TSMC
Process nodes: 16nm
Categories: AI Processor, Interleaver/Deinterleaver, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent

AON1020

The AON1020 expands AI processing capabilities to encompass not only voice and audio recognition but also a variety of sensor applications. It leverages the power of the AONSens Neural Network cores, offering a comprehensive solution that integrates Verilog RTL technology to support both ASIC and FPGA products. Key to the AON1020's appeal is its versatility in addressing various sensor data, such as human activity detection. This makes it indispensable in applications requiring nuanced responses to environmental inputs, from motion to gesture awareness. It deploys these capabilities while minimizing energy demands, aligning perfectly with the needs of battery-operated and wearable devices. By executing real-time analytics on device-stored data, the AON1020 ensures high accuracy in environments fraught with noise and user variability. Its architecture allows it to detect multiple commands simultaneously, enhancing device interaction while maintaining low power consumption. Thus, the AON1020 is not only an innovator in sensor data interaction but also a leader in ensuring extended device functionality without compromising energy efficiency or processing accuracy.

Vendor: AONDevices, Inc.
Categories: AI Processor, Audio Processor, CPU, DSP Core, Processor Core Dependent, Vision Processor

NPU

The NPU, part of the ENLIGHT series by OPENEDGES Technology, is designed as a deep learning accelerator focusing on inferencing computations with superior efficiency and compute density. Developed for high-performance edge computing, this neural processing unit supports a range of operations pertinent to deep neural networks, including convolution and pooling, providing state-of-the-art capability in both power and performance. The NPU's architecture is based on mixed-precision computation using 4-/8-bit quantization which significantly reduces DRAM traffic, thereby optimizing bandwidth utilization and power consumption. Its design incorporates an advanced vector engine optimized for modern deep neural network architectures, enriching its ability to modernize and scale with evolving AI workloads. Accompanying the hardware capabilities, the NPU offers a comprehensive software toolkit featuring network conversion, quantization, and simulation tools. This suite is built for compatibility with mainstream AI frameworks and ensures seamless integration and efficiency in real-world applications ranging from automotive systems to surveillance.
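As a generic illustration of the 4-/8-bit quantization that the NPU's mixed-precision datapath relies on, the sketch below shows per-tensor symmetric quantization in NumPy; it is not the ENLIGHT toolkit's actual API, and the function name is hypothetical.

```python
# A minimal sketch of 4-/8-bit symmetric quantization of the kind a
# mixed-precision NPU relies on. Generic illustration only; not the
# ENLIGHT software toolkit's API.

import numpy as np

def quantize_symmetric(x: np.ndarray, bits: int):
    """Quantize a float tensor to signed integers with a per-tensor scale."""
    qmax = 2 ** (bits - 1) - 1                   # 127 for int8, 7 for int4
    scale = np.max(np.abs(x)) / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

weights = np.random.randn(64, 64).astype(np.float32)
q8, s8 = quantize_symmetric(weights, bits=8)     # 1 byte per weight
q4, s4 = quantize_symmetric(weights, bits=4)     # hardware would pack 2 weights per byte
print(q8.dtype, s8, int(q4.min()), int(q4.max()))
```

Reducing weights and activations to 4 or 8 bits in this way is what cuts DRAM traffic, which is the bandwidth and power saving the description refers to.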

Vendor: OPENEDGES Technology
Categories: AI Processor, Microcontroller, Multiprocessor / DSP

NuLink Die-to-Die PHY for Standard Packaging

Eliyan's NuLink Die-to-Die PHY for standard packaging is a technological innovation designed to enhance chiplet interconnectivity within conventional package forms. Tailored to seamlessly integrate with both silicon bridges and organic package substrates, this product eliminates the need for advanced packaging solutions while matching their performance characteristics. By achieving the same remarkable levels of data transfer efficiency and power optimization typically associated with advanced methods, NuLink technology stands out as a cost-effective solution for multi-die integration. Targeted for ASIC designs, the NuLink Die-to-Die PHY is capable of supporting a wide array of industry standards including UCIe and BoW. Its design enables the connection of chiplets in standard packaging without requiring large silicon interposers, ensuring both significant performance gains and cost savings. This flexibility makes it particularly appealing for systems that require mixing and matching of chiplets of varying dimensions. In practical applications, Eliyan’s solution facilitates increased placement flexibility and supports configurations that demand physical separation of components, such as those between hot ASICs and heat-sensitive dies. By leveraging a standard packaging approach, this PHY product provides substantial improvements in thermal management, cost efficiency, and production timelines compared to traditional methods.

Vendor: Eliyan
Foundries: GlobalFoundries, TSMC
Process nodes: 20nm, 28nm
Categories: AMBA AHB / APB/ AXI, CXL, D2D, MIPI, Network on Chip, Processor Core Dependent

ZIA Stereo Vision

ZIA Stereo Vision (SV) represents DMP's cutting-edge depth sensing solution, engineered to offer high-precision stereo vision for various AI applications. It's designed to process stereo images for advanced depth mapping, utilizing 4K inputs to facilitate distance estimation via stereo matching algorithms like Semi-Global Matching (SGM). Through this technique, ZIA SV ensures that distance information is extracted accurately, a critical capability for applications like autonomous mobile robots or advanced imaging systems. Pre- and post-processing optimization provides the ZIA SV with the tools necessary to refine depth estimates and ensure high accuracy. It supports 8-bit greyscale inputs and outputs a disparity map with an accuracy up to 0.8%, provided by advanced filtering techniques that enhance precision while maintaining a compact form factor crucial in embedded systems. This IP core integrates smoothly into systems requiring reliable depth measurement, utilizing efficient AMBA AXI interfaces for easy integration into diverse applications. With capabilities to support a wide range of hardware platforms and favorable performance-to-size ratios, the ZIA Stereo Vision core embodies DMP's philosophy of compact, high-performance solutions for smarter decision-making in machine vision applications.
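For readers unfamiliar with semi-global matching, the sketch below shows the same class of disparity estimation using OpenCV's StereoSGBM on 8-bit greyscale inputs; it illustrates the algorithm the core accelerates and is not the ZIA SV programming interface (the image file names are placeholders).

```python
# Generic illustration of semi-global-matching-style disparity estimation
# using OpenCV's StereoSGBM. This shows the technique the ZIA SV core
# accelerates in hardware; it is not the core's programming interface.

import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # placeholder 8-bit greyscale inputs
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,      # must be a multiple of 16
    blockSize=5,
    P1=8 * 5 * 5,            # smoothness penalties used by the SGM cost aggregation
    P2=32 * 5 * 5,
)
disparity = sgbm.compute(left, right).astype("float32") / 16.0  # fixed-point -> pixels
# Depth then follows from disparity via focal_length * baseline / disparity.
```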

Vendor: Digital Media Professionals Inc. (DMP Inc.)
Categories: 2D / 3D, GPU, Graphics & Video Modules, Processor Core Independent, Vision Processor

Avispado

Avispado is a sophisticated 64-bit RISC-V core that emphasizes efficiency and adaptability within in-order execution frameworks. It is engineered for energy-efficient SoC designs, making it an excellent choice for machine learning applications with its compact design and ability to communicate seamlessly with RISC-V Vector Units. By utilizing Gazzillion Misses™ technology, the Avispado core effectively handles high sparsity in tensor weights, resulting in superior energy efficiency per operation. This core features a 2-wide in-order configuration and supports the RISC-V Vector Specification 1.0 as well as Semidynamics' Open Vector Interface. With support for large memory capacities, it includes complete MMU features and is Linux-ready, ensuring it is prepared for demanding computational tasks. The core's native CHI interface can be configured as AXI, promoting cache-coherent multiprocessing capabilities. Avispado is optimized for various demanding workloads, with optional extensions for specific needs such as bit manipulation and cryptography. The core's customizable configuration allows changes to its instruction and data cache sizes (I$ and D$ from 8KB to 32KB), ensuring it meets specific application demands while retaining operational efficiency.

Semidynamics
73 Views
GLOBALFOUNDRIES, TSMC
22nm
AI Processor, AMBA AHB / APB/ AXI, CPU, Processor Core Dependent, Processor Cores
View Details
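As a rough illustration of the configurability described in the Avispado listing, the sketch below models a hypothetical parameter record with a validity check on the stated 8KB to 32KB cache range. The field names, the assumption that sizes are powers of two, and the validation logic are invented for illustration and do not reflect Semidynamics' actual configuration interface.

# Hypothetical configuration record for a parameterizable core; field names
# are invented for illustration, not Semidynamics' actual interface.
from dataclasses import dataclass

_VALID_CACHE_KB = (8, 16, 32)    # assumed power-of-two steps in the stated 8KB-32KB range

@dataclass
class AvispadoConfig:
    icache_kb: int = 16          # instruction cache (I$) size
    dcache_kb: int = 16          # data cache (D$) size
    vector_unit: bool = True     # attach a RISC-V Vector 1.0 unit via the Open Vector Interface
    coherent_axi: bool = False   # tune the native CHI port down to AXI

    def validate(self) -> None:
        if self.icache_kb not in _VALID_CACHE_KB:
            raise ValueError(f"I$ size {self.icache_kb}KB outside the 8-32KB range")
        if self.dcache_kb not in _VALID_CACHE_KB:
            raise ValueError(f"D$ size {self.dcache_kb}KB outside the 8-32KB range")

cfg = AvispadoConfig(icache_kb=32, dcache_kb=32)
cfg.validate()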

General Purpose Accelerator (Aptos)

The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites. The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities. Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile-time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.

Ascenium
73 Views
TSMC
10nm, 12nm
CPU, Processor Core Dependent, Processor Core Independent, Processor Cores, Standard cell
View Details

SiFive Essential

The SiFive Essential family embodies a customizable range of processor cores, designed to fulfill various market-specific requirements. From small microcontroller units (MCUs) to more complex 64-bit processors capable of running operating systems, the Essential series provides flexibility in design and functionality. These processors support a diverse set of applications including IoT devices, real-time controls, and control plane processing. They offer scalable performance through sophisticated pipeline architectures, catering to both embedded and rich-OS environments. The Essential series offers advanced configurations which can be tailored to optimize for power and area footprint, making it suitable for devices where space and energy are limited. This aligns well with the needs of edge devices and other applications where efficiency and performance must meet in a balanced manner.

SiFive, Inc.
73 Views
Building Blocks, CPU, IoT Processor, Microcontroller, Processor Core Independent, Processor Cores
View Details

ULYSS MCU

The Cortus ULYSS range of automotive microcontrollers is engineered to meet the demands of sophisticated automotive applications, extending from body control to ADAS and infotainment systems. Utilizing a RISC-V architecture, these microcontrollers provide high performance and efficiency suitable for automotive tasks. Each variant within the ULYSS family caters to specific automotive functions, with capabilities ranging from basic energy management to complex networking and ADAS processing. For instance, the ULYSS1 caters to body control applications with a single-core CPU, while the ULYSS3 provides robust networking capabilities with a quad-core, lockstep MPU operating up to 1.5 GHz. The ULYSS line is structured to offer scalability and flexibility, allowing automotive manufacturers to integrate these solutions seamlessly into various components of a vehicle's electronic system. This focus on adaptability helps Cortus provide both a cost-effective and high-performance solution for its automotive partners.

Cortus SAS
72 Views
Samsung, TSMC
22nm, 90nm
AI Processor, Content Protection Software, CPU, Microcontroller, Multiprocessor / DSP
View Details

Time-Triggered Ethernet

Time-Triggered Ethernet is an enhanced network solution tailored for environments with stringent timing and synchronization requirements. By applying the principles of time-triggered communication, it extends standard Ethernet with deterministic behavior. This protocol ensures timely and predictable data exchange, making it ideal for complex network architectures where timing precision is essential. Using clocks synchronized across the network, Time-Triggered Ethernet virtually eliminates latency variability. This predictability supports a variety of applications, from aviation systems requiring certified safety levels to automotive networks needing high reliability. The protocol manages critical traffic by scheduling communication activities with microsecond-level precision; a conceptual scheduling sketch follows this listing. Time-Triggered Ethernet enhances both the fault tolerance and robustness of the networks it supports, making it a preferred choice for high-stakes scenarios. Its ability to carry safety-critical and time-sensitive data over existing Ethernet infrastructure ensures wide applicability, and by maintaining compatibility with Ethernet standards it serves applications from smart industrial automation to critical aerospace systems.

TTTech Computertechnik AG
71 Views
Ethernet, FlexRay, MIL-STD-1553, Processor Core Independent
View Details
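To make the idea of a static, clock-driven schedule concrete, here is a minimal conceptual sketch, assuming all nodes share a synchronized time base, that computes when time-triggered frames are dispatched within a repeating cluster cycle. The cycle length, frame names, offsets, and periods are illustrative assumptions, not TTTech parameters or APIs.

# Conceptual sketch of a time-triggered dispatch table: each critical frame
# is sent at a fixed offset inside a repeating cluster cycle, so its
# transmission time is known in advance on every synchronized node.
# Illustration of the scheduling principle only, not TTTech's API.

CLUSTER_CYCLE_US = 10_000   # assumed 10 ms cluster cycle

# Illustrative entries: (frame id, offset within cycle in microseconds, period in cycles)
SCHEDULE = [
    ("flight_ctrl_cmd", 100, 1),   # every cycle
    ("sensor_fusion",   450, 1),
    ("health_monitor",  900, 4),   # every 4th cycle
]

def dispatch_times(offset_us: int, period_cycles: int, n_cycles: int) -> list[int]:
    """Absolute send times (microseconds since schedule start) for one frame."""
    return [c * CLUSTER_CYCLE_US + offset_us
            for c in range(0, n_cycles, period_cycles)]

for frame_id, offset, period in SCHEDULE:
    print(frame_id, dispatch_times(offset, period, n_cycles=8))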

Codasip RISC-V BK Core Series

The Codasip RISC-V BK Core Series offers a versatile solution for a broad range of computing needs, from low-power embedded devices to high-performance applications. These cores leverage the open RISC-V instruction set architecture (ISA), enabling designers to take advantage of expansive customization options that optimize performance and efficiency. The BK Core Series supports high configurability, allowing users to adapt the microarchitecture and extend the instruction set based on specific application demands. Incorporating advanced features like zero-overhead loops and SIMD instructions, the BK Core Series is designed to handle computationally intensive tasks efficiently. This makes them ideal for applications in audio processing, AI, and other scenarios requiring high-speed data processing and computation. Additionally, these cores are rigorously verified to meet industry standards, ensuring robustness and reliability even in the most demanding environments. The BK Core Series also aligns with Codasip's focus on functional safety and security. These processors come equipped with features to bolster system reliability, helping prevent cyber threats and errors that could lead to system malfunctions. This makes the BK Core Series an excellent choice for industries that prioritize safety, such as automotive and industrial automation.

Codasip
69 Views
CPU, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

eSi-3264

The eSi-3264 processor core provides advanced DSP functionality within a 32/64-bit architecture, enhanced by Single Instruction, Multiple Data (SIMD) operations. This high-performance CPU is crafted to excel in tasks demanding significant digital signal processing power, such as audio processing or motion control applications. It incorporates advanced SIMD DSP extensions and floating point support, optimizing the core for parallel data processing. The architecture supplies options for extensive custom configurations including instruction and data caches to tailor performance to the specific demands of high-speed and low-power operations. The eSi-3264's hardware debug capabilities combined with its versatile pipeline make it an ideal match for high-precision computing environments where performance and efficiency are crucial. Its ability to handle complex arithmetic operations efficiently with minimal silicon area further cements its position as a leading solution in DSP-focused applications.

eSi-RISC
69 Views
CPU, Microcontroller, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

GenAI v1

RaiderChip's GenAI v1 is a hardware-based generative AI accelerator designed to perform local inference at the edge. It integrates with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over hybrid cloud-assisted AI solutions. The GenAI v1 NPU streamlines the execution of large language models by running them directly on the hardware, removing the dependence on external host CPUs or internet connections. With support for models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves high efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, with RaiderChip reporting better memory efficiency than competing solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs. The design maximizes the number of tokens generated per unit of memory bandwidth, addressing one of the primary limitations in generative AI workloads; a back-of-the-envelope model of this bandwidth bound follows this listing. Its efficient memory usage also reduces dependence on costly memory types such as HBM, opening the door to more affordable alternatives without diminishing processing capability. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that lets users balance performance against hardware cost. Compatibility with a wide range of transformer-based models, including proprietary modifications, positions GenAI v1 for sectors requiring high-speed processing, such as finance, medical diagnostics, and autonomous systems. RaiderChip supports both vanilla and quantized AI models, sustaining the computation speeds required for real-time applications without compromising accuracy. This capability underpins a strategy of enabling versatile and sustainable AI solutions across industries while prioritizing ease of integration and operational independence.

RaiderChip
69 Views
GLOBALFOUNDRIES, TSMC
28nm, 65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details
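The claim about tokens generated per unit of memory bandwidth can be made concrete with a standard back-of-the-envelope model for memory-bound LLM decoding: each generated token requires streaming roughly the full set of model weights, so the decode rate is bounded by memory bandwidth divided by the weight footprint. The model size, quantization, and bandwidth figures below are illustrative assumptions, not RaiderChip specifications.

# Back-of-the-envelope token-rate bound for memory-bound LLM decoding.
# All numbers are illustrative assumptions, not RaiderChip specifications.

def max_tokens_per_second(params_billion: float,
                          bits_per_weight: float,
                          bandwidth_gb_s: float) -> float:
    """Upper bound on decode rate when each token must stream the full
    weight set from memory (ignores KV-cache and activation traffic)."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return bandwidth_gb_s * 1e9 / weight_bytes

# Example: a ~3B-parameter model (roughly Llama 3.2 3B class) quantized
# to 4 bits, on memory assumed to deliver ~12.8 GB/s.
print(max_tokens_per_second(3.0, 4, 12.8))   # ~8.5 tokens/s upper bound

# The same model with 16-bit weights on the same memory:
print(max_tokens_per_second(3.0, 16, 12.8))  # ~2.1 tokens/s upper bound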

M3000 Graphics Processor

The M3000 Graphics Processor is DMP's solution for high-performance, energy-efficient 3D graphics processing. It builds on DMP's proprietary Musashi 3D graphics architecture, which is tailored to deliver state-of-the-art graphics performance for the demanding computation found in VR and AR applications and other visual-computing workloads. The M3000 is designed for strong Power, Performance, and Area (PPA) metrics and supports OpenGL ES 3.0 for compatibility with current visual-processing standards. Its flexible shader-cluster architecture can be scaled to meet varying customer requirements for performance and area, and it can be implemented across diverse devices such as ASICs, ASSPs, and SoCs. Technically, the M3000 supports output resolutions up to 4K x 2K and communicates over AXI 3/4 and APB interfaces, suiting applications that range from IoT devices to automotive infotainment systems. The processor targets efficient, high-performance graphics acceleration and seamless integration into modern HMI systems.

Digital Media Professionals Inc. (DMP Inc.)
67 Views
2D / 3D, ADPCM, GPU, Multiprocessor / DSP
View Details