All IPs > Processor > Vision Processor
Vision processors are a specialized subset of semiconductor IPs designed to efficiently handle and process visual data. These processors are pivotal in applications that require intensive image analysis and computer vision capabilities, such as artificial intelligence, augmented reality, virtual reality, and autonomous systems. The primary purpose of vision processor IPs is to accelerate the performance of vision processing tasks while minimizing power consumption and maximizing throughput.
In the world of semiconductor IP, vision processors stand out due to their ability to integrate advanced functionalities such as object recognition, image stabilization, and real-time analytics. These processors often leverage parallel processing, machine learning algorithms, and specialized hardware accelerators to perform complex visual computations efficiently. As a result, products ranging from high-end smartphones to advanced driver-assistance systems (ADAS) and industrial robots benefit from improved visual understanding and processing capabilities.
The semiconductor IPs for vision processors can be found in a wide array of products. In consumer electronics, they enhance the capabilities of cameras, enabling features like face and gesture recognition. In the automotive industry, vision processors are crucial for delivering real-time data processing needed for safety systems and autonomous navigation. Additionally, in sectors such as healthcare and manufacturing, vision processor IPs facilitate advanced imaging and diagnostic tools, improving both precision and efficiency.
As technology advances, the demand for vision processor IPs continues to grow. Developers and designers seek IPs that offer scalable architectures and can be customized to meet specific application requirements. By providing enhanced performance and reducing development time, vision processor semiconductor IPs are integral to pushing the boundaries of what's possible with visual data processing and expanding the capabilities of next-generation products.
The Akida Neural Processor is a sophisticated AI processing unit designed to handle complex neural network tasks with unmatched precision and efficiency. Utilizing an event-based processing model, Akida exploits data sparsity to minimize operations and hence decrease power usage significantly while enhancing throughput. This processor is built around a mesh network interconnect, with each node equipped with configurable Neural Network Engines that can handle convolutional and fully connected neural networks. With these capabilities, Akida can process data at the edge, maintaining high-speed, low-latency responses ideal for real-time applications. Akida maintains seamless functionality in diverse use cases, from predictive maintenance to streaming analytics in sensors. By supporting on-chip learning and providing strong privacy controls, this processor ensures data security by reducing cloud data exchanges, making it a trusted component for sensitive applications.
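The operation-count savings from exploiting data sparsity can be illustrated with a short sketch (generic NumPy code, not BrainChip's actual implementation): by performing multiply-accumulates only for non-zero activation "events", the work done scales with the number of events rather than with tensor size.

```python
import numpy as np

def sparse_dot(weights, activations):
    """Multiply-accumulate only where activations are non-zero,
    mimicking event-based processing that exploits data sparsity."""
    events = np.nonzero(activations)[0]       # indices of non-zero "events"
    ops = len(events) * weights.shape[0]      # MACs actually performed
    out = weights[:, events] @ activations[events]
    return out, ops

rng = np.random.default_rng(0)
acts = rng.random(1024)
acts[acts < 0.9] = 0.0                        # ~90% sparsity, typical of ReLU outputs
w = rng.random((256, 1024))

dense_ops = w.size                            # MACs a dense engine would perform
out, sparse_ops = sparse_dot(w, acts)
print(f"dense MACs: {dense_ops}, event-driven MACs: {sparse_ops}")
```

Because the zero entries contribute nothing to the output, the event-driven result is identical to the dense one while performing roughly a tenth of the operations at this sparsity level.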
The 2nd Generation Akida processor introduces groundbreaking enhancements to BrainChip's neuromorphic processing platform, particularly ideal for intricate network models. It integrates eight-bit weight and activation support, improving energy efficiency and computational performance without enlarging model size. By supporting an extensive application set, Akida 2nd Generation addresses diverse Edge AI needs untethered from cloud dependencies. Notably, Akida 2nd Generation incorporates Temporal Event-Based Neural Nets (TENNs) and Vision Transformers, facilitating robust tracking through high-speed vision and audio processing. Its built-in support for on-chip learning further optimizes AI efficiency by reducing reliance on cloud training. This versatile processor fits perfectly for spatio-temporal applications across industrial, automotive, and healthcare sectors. Developers gain from its Configurable IP Platform, which allows seamless scalability across multiple use cases. The Akida ecosystem, including MetaTF, offers developers a strong foundation for integrating cutting-edge AI capabilities into Edge systems, ensuring secure and private data processing.
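How eight-bit weight and activation support keeps models compact can be sketched with a generic symmetric int8 quantization routine (an illustration of the general technique, not BrainChip's specific scheme): floats are mapped onto the range [-127, 127] with a single scale factor, cutting storage fourfold versus float32 at the cost of a bounded rounding error.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor int8 quantization: map the float range onto [-127, 127]."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
weights = rng.standard_normal(1000).astype(np.float32)

q, scale = quantize_int8(weights)
recon = dequantize(q, scale)

print(f"storage: {weights.nbytes} B float32 -> {q.nbytes} B int8")
print(f"max abs error: {float(np.abs(weights - recon).max()):.4f}")
```

The worst-case reconstruction error of this scheme is half the scale factor, which is why eight-bit formats preserve accuracy well for many network layers.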
The KL730 AI SoC is equipped with a state-of-the-art third-generation reconfigurable NPU architecture, delivering up to 8 TOPS of computational power. This innovative architecture enhances computational efficiency, particularly with the latest CNN networks and transformer applications, while reducing DDR bandwidth demands. The KL730 excels in video processing, offering support for 4K 60FPS output and boasts capabilities like noise reduction, wide dynamic range, and low-light imaging. It is ideal for applications such as intelligent security, autonomous driving, and video conferencing.
Designed for high-performance applications, the Metis AIPU PCIe AI Accelerator Card employs four Metis AI Processing Units to deliver exceptional computational power. With its ability to reach up to 856 TOPS, this card is tailored for demanding vision applications, making it suitable for real-time processing of multi-channel video data. The PCIe form factor ensures easy integration into existing systems, while the customized software platform simplifies the deployment of neural networks for tasks like YOLO object detection. This accelerator card ensures scalability and efficiency, allowing developers to implement AI applications that are both powerful and cost-effective. The card’s architecture also takes advantage of RISC-V and Digital-In-Memory Computing technologies, bringing substantial improvements in speed and power efficiency.
The Origin E1 neural engines by Expedera redefine efficiency and customization for low-power AI solutions. Specially crafted for edge devices like home appliances and security cameras, these engines serve ultra-low power applications that demand continuous sensing capabilities. They minimize power consumption to as low as 10-20mW, keeping data secure and eliminating the need for external memory access. The advanced packet-based architecture enhances performance by facilitating parallel layer execution, thereby optimizing resource utilization. Designed to be a perfect fit for dedicated AI functions, Origin E1 is tailored to support specific neural networks efficiently while reducing silicon area and system costs. It supports various neural networks, from CNNs to RNNs, making it versatile for numerous applications. This engine is also one of the most power-efficient in the industry, boasting an impressive 18 TOPS per Watt. Origin E1 also offers a full TVM-based software stack for easy integration and performance optimization across customer platforms. It supports a wide array of data types and networks, ensuring flexibility and sustained power efficiency, averaging 80% utilization. This makes it a reliable choice for OEMs looking for high performance in always-sensing applications, offering a competitive edge in both power efficiency and security.
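The headline figures above can be cross-checked with simple arithmetic: at 18 TOPS per Watt, a 10-20 mW power budget implies roughly 0.18-0.36 TOPS of sustained throughput. The sketch below is a back-of-envelope reading of those quoted numbers, not a vendor specification.

```python
# Implied throughput = efficiency (TOPS/W) x power budget (W).
tops_per_watt = 18.0
for power_mw in (10, 20):
    tops = tops_per_watt * power_mw / 1000.0   # convert mW to W
    print(f"{power_mw} mW -> {tops:.2f} TOPS ({tops * 1000:.0f} GOPS)")
```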
The Metis AIPU M.2 Accelerator Module is a cutting-edge AI processing unit designed to boost the performance of edge computing tasks. This module integrates seamlessly with innovative applications, offering a robust solution for inference at the edge. It excels in vision AI tasks with its dedicated 512MB LPDDR4x memory, providing the necessary storage for complex tasks. Offering unmatched energy efficiency, the Metis AIPU M.2 module is capable of delivering significant performance gains while maintaining minimal power consumption. At an accessible price point, this module opens up AI processing capabilities for a variety of applications. As an essential component of next-generation vision processing systems, it is ideal for industries seeking to implement AI technologies swiftly and effectively.
The Origin E8 NPUs represent Expedera's cutting-edge solution for environments demanding the utmost in processing power and efficiency. This high-performance core scales its TOPS capacity between 32 and 128 with single-core configurations, addressing complex AI tasks in automotive and data-centric operational settings. The E8’s architecture stands apart due to its capability to handle multiple concurrent tasks without any compromise in performance. This unit adopts Expedera's signature packet-based architecture for optimized parallel execution and resource management, removing the necessity for hardware-specific tweaks. The Origin E8 also supports high input resolutions up to 8K and integrates well across standard and custom neural networks, enhancing its utility in future-forward AI applications. Leveraging a flexible, scalable design, the E8 IP cores make use of an exhaustive software suite to augment AI deployment. Field-proven and already deployed in a multitude of consumer vehicles, Expedera's Origin E8 provides a robust, reliable choice for developers needing optimized AI inference performance, ideally suited for data centers and high-performance automotive systems.
Akida IP stands as an advanced neuromorphic processor, emulating brain-like processing to efficiently handle sensor inputs at acquisition points. This digital processor offers superior performance, precision, and significant reductions in power usage. By facilitating localized AI/ML tasks, it decreases latency and enhances data privacy. Akida IP is built to infer and learn at the edge, offering highly customizable, event-based neural processing. The architecture of Akida IP is scalable and compact, supporting an extensive mesh network connection of up to 256 nodes. Each node includes four Neural Processing Engines (NPEs), configurable for convolutional and fully connected processes. By leveraging data sparsity, Akida reduces the number of operations performed, making it a cost-effective solution for various edge AI applications. Including MetaTF support for model simulations, Akida IP brings a fully synthesizable RTL IP package compatible with standard EDA tools, emphasizing ease of integration and deployment. This enables developers to swiftly design, develop, and implement custom AI solutions with robust security and privacy protection.
The NaviSoC, a flagship product of ChipCraft, combines a GNSS receiver with an on-chip application processor, providing an all-in-one solution for high-precision navigation and timing applications. This product is designed to meet the rigorous demands of industries such as automotive, UAVs, and smart agriculture. One of its standout features is the ability to support all major global navigation satellite systems, offering versatile functionality for various professional uses. The NaviSoC is tailored for high efficiency, delivering performance that incorporates low power consumption with robust computational capabilities. Specifically tailored for next-generation applications, NaviSoC offers flexibility through its ability to be adapted for different tasks, making it a preferred choice for many industries. It integrates seamlessly into systems requiring precision and reliability, providing developers with a wide array of programmable peripherals and interfaces. The foundational design ethos of the NaviSoC revolves around minimizing power usage while ensuring high precision and accuracy, making it an ideal component for battery-powered and portable devices. Additionally, ChipCraft provides integrated software development tools and navigation firmware, ensuring that clients can capitalize on fast time-to-market for their products. The design of the NaviSoC takes a comprehensive approach, factoring in real-world application requirements such as temperature variation and environmental challenges, thus providing a resilient and adaptable product for diverse uses.
The AX45MP is engineered as a high-performance processor that supports multicore architecture and advanced data processing capabilities, particularly suitable for applications requiring extensive computational efficiency. Part of the AndesCore processor line, it capitalizes on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.
Origin E2 NPUs focus on delivering power and area efficiency, making them ideal for on-device AI applications in smartphones and edge nodes. These processing units support a wide range of neural networks, including video, audio, and text-based applications, all while maintaining impressive performance metrics. The unique packet-based architecture ensures effective performance with minimal latency and eliminates the need for hardware-specific optimizations. The E2 series offers customization options allowing it to fit specific application needs perfectly, with configurations supporting up to 20 TOPS. This flexibility represents significant design advancements that help increase processing efficiency without introducing latency penalties. Expedera's power-efficient design results in NPUs with industry-leading performance at 18 TOPS per Watt. Further augmenting the value of E2 NPUs is their ability to run multiple neural network types efficiently, including LLMs, CNNs, RNNs, and others. The IP is field-proven, deployed in over 10 million consumer devices, reinforcing its reliability and effectiveness in real-world applications. This makes the Origin E2 an excellent choice for companies aiming to enhance AI capabilities while managing power and area constraints effectively.
The MIPI™ V-NLM-01 is a Non-Local Means (NLM) image noise reduction core, designed to enhance image quality by minimizing noise while preserving detail. This core is highly configurable, allowing users to customize the search window size and the number of bits per pixel, thereby tailoring the noise reduction process to specific application demands. Specially optimized for HDMI output resolutions of 2048x1080 and frame rates from 30 to 60 fps, the V-NLM-01 utilizes an efficient algorithmic approach to deliver natural and artifact-free images. Its parameterized implementation ensures adaptability across various image processing environments, making it essential for applications where high-fidelity image quality is critical. The V-NLM-01 exemplifies VLSI Plus Ltd.'s prowess in developing specialized IP cores that significantly enhance video quality. Its capacity to effectively process high-definition video data makes it suitable for integration in a wide range of digital video platforms, ensuring optimal visual output.
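The core idea behind Non-Local Means can be sketched in a few lines of generic Python (a textbook illustration, not VLSI Plus's optimized hardware algorithm): each output pixel is a weighted average of pixels in a search window, with weights derived from patch similarity, so repeated structure is averaged away while genuine detail is preserved.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Minimal Non-Local Means: each output pixel is a weighted average of
    search-window pixels, weighted by similarity of surrounding patches."""
    pad, half = patch // 2, search // 2
    padded = np.pad(img, pad + half, mode='reflect')
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad + half, j + pad + half
            ref = padded[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
            acc, wsum = 0.0, 0.0
            for di in range(-half, half + 1):
                for dj in range(-half, half + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pad:ni + pad + 1, nj - pad:nj + pad + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch dissimilarity
                    wgt = np.exp(-d2 / (h * h))       # similar patches weigh more
                    acc += wgt * padded[ni, nj]
                    wsum += wgt
            out[i, j] = acc / wsum
    return out

rng = np.random.default_rng(2)
clean = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))   # smooth gradient test image
noisy = clean + rng.normal(0, 0.05, clean.shape)
denoised = nlm_denoise(noisy)
print("error std before:", round(float(np.std(noisy - clean)), 4),
      "after:", round(float(np.std(denoised - clean)), 4))
```

A hardware core like the V-NLM-01 performs this same window search in a streaming pipeline; the configurable search window size mentioned above corresponds to the `search` parameter here.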
The TimbreAI T3 is an ultra-low-power AI engine specifically designed for audio applications, providing optimal performance in noise reduction tasks for devices like headsets. Known for its energy efficiency, the T3 operates with less than 300 microwatts of power consumption, allowing it to support performance-hungry applications without requiring external memory. The innovative architecture of TimbreAI leverages a packet-based framework focusing on achieving superior power efficiency and customization to the specific requirements of audio neural networks. This tailored engineering ensures no alteration is needed in trained models to achieve the desired performance metrics, thereby establishing a new standard in energy-efficient AI deployments across audio-centric devices. Geared towards consumer electronics and wearables, the T3 extends the potential for battery life in TWS headsets and similar devices by significantly reducing power consumption. With its preconfiguration for handling common audio network functions, TimbreAI provides a seamless development environment for OEMs eager to integrate AI capabilities with minimal power and area overheads.
Cortus's Automotive AI Inference SoC is a breakthrough solution tailored for autonomous driving and advanced driver assistance systems. This SoC combines efficient image processing with AI inference capabilities, optimized for city infrastructure and mid-range vehicle markets. Built on a RISC-V architecture, the AI Inference SoC is capable of running specialized algorithms, such as those in the YOLO series, for fast and accurate image recognition. Its low power consumption makes it suitable for embedded automotive applications requiring enhanced processing without compromising energy efficiency. This chip is well suited to Level 2 and Level 4 autonomous driving systems, providing a comprehensive AI-driven platform that enhances safety and operational capabilities in urban settings.
The Chimera GPNPU by Quadric is a versatile processor specifically designed to enhance machine learning inference tasks on a broad range of devices. It provides a seamless blend of traditional digital signal processing (DSP) and neural processing unit (NPU) capabilities, which allow it to handle complex ML networks alongside conventional C++ code. Designed with a focus on adaptability, the Chimera GPNPU architecture enables easy porting of various models and application software, making it a robust solution for rapidly evolving AI technologies. A key feature of the Chimera GPNPU is its scalable design, which extends from 1 to a remarkable 864 TOPS, catering to applications from standard to advanced high-performance requirements. This scalability is coupled with its ability to support a broad range of ML networks, such as classic backbones, vision transformers, and large language models, fulfilling various computational needs across industries. The Chimera GPNPU also excels in automotive applications, including ADAS and ECU systems, due to its ASIL-ready design. The processor's hybrid architecture merges Von Neumann and 2D SIMD matrix capabilities, promoting efficient execution of scalar, vector, and matrix operations. It boasts a deterministic execution pipeline and extensive customization options, including configurable instruction caches and local register memories that optimize memory usage and power efficiency. This design effectively reduces off-chip memory accesses, ensuring high performance while minimizing power consumption.
The KL630 AI SoC embodies next-generation AI chip technology with a pioneering NPU architecture. It uniquely supports Int4 precision and transformer networks, offering superb computational efficiency combined with low power consumption. Utilizing an ARM Cortex A5 CPU, it supports a range of AI frameworks and is built to handle scenarios from smart security to automotive applications, providing robust capability in both high and low light conditions.
The Origin E6 neural engines are built to push the boundaries of what's possible in edge AI applications. Supporting the latest in AI model innovations, such as generative AI and various traditional networks, the E6 scales from 16 to 32 TOPS, aimed at balancing performance, efficiency, and flexibility. This versatility is essential for high-demand applications in next-generation devices like smartphones, digital reality setups, and consumer electronics. Expedera’s E6 employs packet-based architecture, facilitating parallel execution that leads to optimal resource usage and eliminating the need for dedicated hardware optimizations. A standout feature of this IP is its ability to maintain up to 90% processor utilization even in complex multi-network environments, thus proving its robustness and adaptability. Crafted to fit various use cases precisely, E6 offers a comprehensive TVM-based software stack and is well-suited for tasks that require simultaneous running of numerous neural networks. This has been proven through its deployment in over 10 million consumer units. Its design effectively manages power and system resources, thus minimizing latency and maximizing throughput in demanding scenarios.
The xcore.ai platform by XMOS Semiconductor is a sophisticated and cost-effective solution aimed specifically at intelligent IoT applications. Harnessing a unique multi-threaded micro-architecture, xcore.ai provides superior low latency and highly predictable performance, tailored for diverse industrial needs. It is equipped with 16 logical cores divided across two multi-threaded processing tiles. These tiles come enhanced with 512 kB of SRAM and a vector unit supporting both integer and floating-point operations, allowing it to process both simple and complex computational demands efficiently. A key feature of the xcore.ai platform is its powerful interprocessor communication infrastructure, which enables seamless high-speed communication between processors, facilitating ultimate scalability across multiple systems on a chip. Within this homogeneous environment, developers can comfortably integrate DSP, AI/ML, control, and I/O functionalities, allowing the device to adapt to specific application requirements efficiently. Moreover, the software-defined architecture allows optimal configuration, reducing power consumption and achieving cost-effective intelligent solutions. The xcore.ai platform shows impressive DSP capabilities, thanks to a scalar pipeline that supports 32-bit floating-point operations at peak rates of up to 1600 MFLOPS. AI/ML capabilities are also robust, with support for various bit vector operations, making the platform a strong contender for AI applications requiring homogeneous computing environments and exceptional operator integration.
The Matchstiq™ X40 by Epiq Solutions is a compact, high-performance software-defined radio (SDR) system designed to harness the power of AI and machine learning at the RF edge. Its small form factor makes it suitable for payloads with size, weight, and power constraints. The unit offers RF coverage up to 18GHz with an instantaneous bandwidth up to 450MHz, making it an excellent choice for demanding environments requiring advanced signal processing and direction finding. One of the standout features of the Matchstiq™ X40 is its integration of Nvidia's Orin NX for CPU/GPU operations and an AMD Zynq Ultrascale+ FPGA, allowing for sophisticated data processing capabilities directly at the point of RF capture. This combination offers enhanced performance for real-time signal analysis and machine learning implementations, making it suited for a variety of high-tech applications. The device supports a variety of input/output configurations, including 1 GbE, USB 3.0, and GPSDO, ensuring compatibility with numerous host systems. It offers dual configurations that support up to four receivers and two transmitters, along with options for phase-coherent multi-channel operations, thereby broadening its usability across different mission-critical tasks.
The Dynamic Neural Accelerator II by EdgeCortix is a pioneering neural network core that combines flexibility and efficiency to support a broad array of edge AI applications. Engineered with run-time reconfigurable interconnects, it facilitates exceptional parallelism and efficient data handling. The architecture supports both convolutional and transformer neural networks, offering optimal performance across varied AI use cases. This architecture vastly improves upon traditional IP cores by dynamically reconfiguring data paths, which significantly enhances parallel task execution and reduces memory bandwidth usage. By adopting this approach, the DNA-II boosts its processing capability while minimizing energy consumption, making it highly effective for edge AI applications that require high output with minimal power input. Furthermore, the DNA-II's adaptability enables it to tackle inefficiencies often seen in batching tasks across other IP ecosystems. The architecture ensures that high utilization and low power consumption are maintained across operations, profoundly impacting sectors relying on edge AI for real-time data processing and decision-making.
The AON1020 expands AI processing capabilities to encompass not only voice and audio recognition but also a variety of sensor applications. It leverages the power of the AONSens Neural Network cores, offering a comprehensive solution that integrates Verilog RTL technology to support both ASIC and FPGA products. Key to the AON1020's appeal is its versatility in addressing various sensor data, such as human activity detection. This makes it indispensable in applications requiring nuanced responses to environmental inputs, from motion to gesture awareness. It deploys these capabilities while minimizing energy demands, aligning perfectly with the needs of battery-operated and wearable devices. By executing real-time analytics on device-stored data, the AON1020 ensures high accuracy in environments fraught with noise and user variability. Its architecture allows it to detect multiple commands simultaneously, enhancing device interaction while maintaining low power consumption. Thus, the AON1020 is not only an innovator in sensor data interaction but also a leader in ensuring extended device functionality without compromising energy efficiency or processing accuracy.
ZIA Stereo Vision (SV) represents DMP's cutting-edge depth sensing solution, engineered to offer high-precision stereo vision for various AI applications. It's designed to process stereo images for advanced depth mapping, utilizing 4K inputs to facilitate distance estimation via stereo matching algorithms like Semi-Global Matching (SGM). Through this technique, ZIA SV ensures that distance information is extracted accurately, a critical capability for applications like autonomous mobile robots or advanced imaging systems. Pre- and post-processing optimization provides the ZIA SV with the tools necessary to refine depth estimates and ensure high accuracy. It supports 8-bit greyscale inputs and outputs a disparity map with accuracy of up to 0.8%, enabled by advanced filtering techniques that enhance precision while maintaining the compact form factor crucial in embedded systems. This IP core integrates smoothly into systems requiring reliable depth measurement, utilizing efficient AMBA AXI interfaces for easy integration into diverse applications. With capabilities to support a wide range of hardware platforms and favorable performance-to-size ratios, the ZIA Stereo Vision core embodies DMP's philosophy of compact, high-performance solutions for smarter decision-making in machine vision applications.
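The matching step underlying disparity estimation can be illustrated with a minimal local block matcher (generic NumPy, far simpler than the SGM pipeline in ZIA SV, which additionally aggregates matching costs with smoothness penalties along multiple scanline directions): for each left-image pixel, the disparity is the horizontal shift whose block in the right image minimizes the sum of absolute differences.

```python
import numpy as np

def block_match_disparity(left, right, max_disp=8, block=5):
    """Local stereo matching: pick, per pixel, the disparity whose shifted
    block in the right image minimises sum-of-absolute-differences (SAD)."""
    pad = block // 2
    H, W = left.shape
    L = np.pad(left, pad, mode='edge')
    R = np.pad(right, pad, mode='edge')
    disp = np.zeros((H, W), dtype=int)
    for i in range(H):
        for j in range(W):
            ref = L[i:i + block, j:j + block]        # block around left pixel (i, j)
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, j) + 1):
                cand = R[i:i + block, j - d:j - d + block]
                c = float(np.abs(ref - cand).sum())  # SAD matching cost
                if c < best_cost:
                    best_cost, best_d = c, d
            disp[i, j] = best_d
    return disp

# Synthetic pair: a random texture shifted by 4 px between views.
rng = np.random.default_rng(3)
left = rng.random((32, 32))
right = np.roll(left, -4, axis=1)   # right view sees the scene 4 px to the left
disp = block_match_disparity(left, right)
print("estimated disparity at image centre:", disp[16, 16])
```

With known camera geometry, depth is then inversely proportional to disparity; the pre- and post-filtering stages mentioned above serve to reject ambiguous matches that this bare-bones matcher would get wrong on textureless regions.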
The Polar ID Biometric Security System represents a major innovation in smartphone security, offering a simplified yet highly secure face unlock solution. Unlike traditional systems, Polar ID uses breakthrough meta-optic technology to capture the unique 'polarization signature' of a face, enabling it to detect and prevent spoofing attempts with exceptional accuracy. This system provides more than 10 times the resolution of existing facial authentication solutions, functioning reliably under various light conditions, from bright daylight to complete darkness. It achieves this with a single low-profile near-infrared polarization camera and a 940nm illumination source, eliminating the need for bulky and expensive optical modules. Furthermore, the Polar ID not only reduces the required footprint of the technology, allowing it to fit into more compact form factors, but it also lowers costs, making secure face recognition accessible to a broader range of devices. This advancement in biometric technology is particularly valuable for mobile and consumer electronics, offering enhanced security without sacrificing convenience. The Polar ID sets a new benchmark for mobile security solutions with its unique combination of size, security, and cost-efficiency.
The eSi-3264 processor core provides advanced DSP functionality within a 32/64-bit architecture, enhanced by Single Instruction, Multiple Data (SIMD) operations. This high-performance CPU is crafted to excel in tasks demanding significant digital signal processing power, such as audio processing or motion control applications. It incorporates advanced SIMD DSP extensions and floating point support, optimizing the core for parallel data processing. The architecture supplies options for extensive custom configurations including instruction and data caches to tailor performance to the specific demands of high-speed and low-power operations. The eSi-3264's hardware debug capabilities combined with its versatile pipeline make it an ideal match for high-precision computing environments where performance and efficiency are crucial. Its ability to handle complex arithmetic operations efficiently with minimal silicon area further cements its position as a leading solution in DSP-focused applications.
Hanguang 800 is a sophisticated AI acceleration chip tailored for demanding neural network tasks. Developed with T-Head's cutting-edge technology, it excels in delivering high throughput for deep learning workloads. This chip employs a robust architecture optimized for AI computations, providing unprecedented performance improvements in neural network execution. It's particularly suited for scenarios requiring large-scale AI processing, such as image recognition and natural language processing. The chip's design facilitates the rapid conversion of highly complex AI models into real-time applications, enabling enterprises to harness the full potential of AI in their operations.
The Spiking Neural Processor T1 by Innatera is a revolutionary microcontroller designed to handle sensory processing with extreme efficiency. This processor is specifically crafted to operate at ultra-low power levels, below 1 milliwatt, yet it delivers exceptional performance in pattern recognition tasks right at the sensor edge. Utilizing a neuromorphic architecture, it processes sensor data in real time to identify patterns such as audio signals or movements, significantly outperforming traditional processing methods in both speed and power consumption. Engineered to function in always-on operation modes, this microcontroller is critical for applications where maintaining continuous operation is essential. Its design offloads processing tasks from the main application processor, allowing for dedicated computation of sensor data. This includes conditioning, filtering, and classification tasks, ensuring they are carried out efficiently within the strictest power limits. With its ability to be integrated with various sensors, the Spiking Neural Processor T1 empowers devices to achieve advanced functionalities such as presence detection, touch-free interfaces, and active monitoring in wearable devices. This product supports a comprehensive range of applications through its innovative approach to sensor data handling, leveraging the unique capabilities of spiking neural networks to drive cognitive processing in less power-intensive environments.
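The behaviour of a spiking neuron can be sketched with a textbook leaky integrate-and-fire model (a generic illustration, not Innatera's proprietary neuron design): the membrane potential integrates input, decays over time, and emits a spike and resets when it crosses a threshold, so stronger input produces denser spike trains and silence costs almost nothing.

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential integrates
    input, leaks each step, and fires (then resets) on crossing threshold."""
    v = 0.0
    spikes = []
    for i_t in input_current:
        v = leak * v + i_t          # leak, then integrate this time step
        if v >= threshold:
            spikes.append(1)
            v = 0.0                 # reset after firing
        else:
            spikes.append(0)
    return spikes

# Constant drive produces a regular spike train; stronger drive fires faster.
weak = lif_neuron([0.3] * 20)
strong = lif_neuron([0.6] * 20)
print("weak drive spikes: ", sum(weak))
print("strong drive spikes:", sum(strong))
```

Because information is carried in the timing and rate of these sparse spikes rather than in dense activations, computation only happens when events occur, which is the basis of the sub-milliwatt always-on operation described above.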
The Yitian 710 processor is a flagship Arm server chip developed by T-Head. It utilizes advanced architecture to deliver exceptional performance and bandwidth, supporting the latest Armv9 instruction set. Constructed with a 2.5D packaging, the processor integrates two dies, boasting a staggering 60 billion transistors. Designed for high-efficiency computing, it includes 128 high-performance Armv9 CPU cores. Each core encompasses a 64KB level one instruction cache, a 64KB level one data cache, and a shared 1MB level two cache. This architecture supports extensive on-chip memory including a 128MB system cache, ensuring rapid data access and processing.
aiWare represents a specialized hardware IP core designed for optimizing neural network performance in automotive AI applications. This neural processing unit (NPU) delivers exceptional efficiency for a spectrum of AI workloads, crucial for powering automated driving systems. Its design is focused on scalability and versatility, supporting applications ranging from L2 regulatory tasks to complex multi-sensor L3+ systems, ensuring flexibility to accommodate evolving technological needs. The aiWare hardware is integrated with advanced features like industry-leading data bandwidth management and deterministic processing, ensuring high efficiency across diverse workloads. This makes it a reliable choice for automotive sectors striving for ASIL-B certification in safety-critical environments. aiWare's architecture utilizes patented dataflows to maximize performance while minimizing power consumption, critical in automotive scenarios where resource efficiency is paramount. Additionally, aiWare is supported by an innovative SDK that simplifies the development process through offline performance estimation and extensive integration tools. These capabilities reduce the dependency on low-level programming for neural network execution, streamlining development cycles and enhancing the adaptability of AI applications in automotive domains.
Designed for applications requiring exceptional energy efficiency and computational effectiveness, the Tianqiao-80 High-Efficiency 64-bit RISC-V CPU provides a robust solution for modern computing needs. Tailored for high-performance scenarios, this CPU core offers considerable advantages in both mobile and desktop environments, meeting the increasing demands for intelligent and responsive technology. The Tianqiao-80 features an innovative design that enhances processing efficiency, making it an ideal fit for applications such as artificial intelligence, automotive systems, and desktop computing. With its 64-bit architecture, the core efficiently manages resource-intensive tasks while maintaining competitive power usage, thus delivering enhanced operational effectiveness. This processor is also characterized by its ability to integrate seamlessly into diverse computing ecosystems, supporting high-performance interfaces and rapid data processing. Its architectural enhancements ensure that it meets the needs of modern computing, providing a reliable and versatile option for developers working across a wide spectrum of digital technologies.
FortiPKA-RISC-V is a specialized public key accelerator that speeds up complex cryptographic operations while protecting against side-channel analysis (SCA) and fault injection attack (FIA) threats. Designed for embedded systems and IoT devices, this IP employs modular multiplication and eliminates the need for Montgomery domain transformations, streamlining operations and optimizing area usage. It offers extensive support for a variety of cryptographic algorithms, including RSA and ECC, providing comprehensive cryptographic capability suitable for a range of security-intensive applications. The product is engineered to enhance data protection while improving system performance, which is crucial for compliance with demanding industry standards.
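For context on what a public key accelerator offloads, the toy sketch below runs an RSA round-trip using plain modular exponentiation. The parameters are tiny demo values, not secure key sizes, and FortiPKA's hardened, Montgomery-free datapath is not modeled here; this only shows the arithmetic being accelerated.

```python
# Toy RSA round-trip on plain modular arithmetic (illustrative only;
# real keys are 2048+ bits and real accelerators add SCA/FIA hardening).
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse, Py 3.8+)

msg = 42
cipher = pow(msg, e, n)    # encrypt: msg^e mod n
plain = pow(cipher, d, n)  # decrypt: cipher^d mod n
print(plain)               # 42
```

Each `pow(x, y, n)` call is exactly the kind of long-integer modular exponentiation that dominates RSA and ECC workloads and that dedicated hardware performs orders of magnitude faster than an embedded CPU.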
The KL520 AI SoC by Kneron marked a significant breakthrough in edge AI technology, offering a well-rounded solution with notable power efficiency and performance. This chip can function as a host or as a supplementary co-processor to enable advanced AI features in diverse smart devices. It is highly compatible with a range of 3D sensor technologies and is perfectly suited for smart home innovations, facilitating long battery life and enhanced user control without reliance on external cloud services.
The SiFive Performance family of RISC-V processors targets maximum throughput and efficiency for environments such as web servers and multimedia processing. These processors come in three-wide to six-wide out-of-order (OoO) configurations, with dedicated vector engines designed for AI workloads. By delivering energy efficiency without compromising performance, the SiFive Performance cores are tailored to the needs of diverse high-performance applications. The cores are scalable, with configurations extending up to 256 cores. This scalability is essential for data centers and mobile infrastructure alike, where performance and efficiency are paramount. Key technical features include a six-wide out-of-order core architecture, RAS (reliability, availability, and serviceability) functionality, and a scalable core cluster. In data centers and beyond, these cores facilitate a diverse range of applications, including big data analytics and enterprise infrastructure. SiFive's commitment to high-performance RISC-V processors answers the growing demand for performant, area-efficient application processors.
The AON1100 represents AONDevices' flagship edge AI solution for voice and sensor applications. Its design centers on high accuracy combined with ultra-low power consumption. The chip excels at tasks such as voice commands, speaker identification, and sensor data integration. Drawing less than 260μW, the AON1100 maintains operational accuracy even in environments with sub-0dB signal-to-noise ratios. This performance makes it well suited to always-on devices, including smart home products, wearables, and automotive systems that demand real-time responsiveness and minimal energy draw. The AON1100 incorporates streamlined algorithms that enhance its sensor fusion capabilities, paving the way for smarter device contexts beyond traditional interactions. Its RISC-V support adds flexibility and compatibility with a wide range of applications, contributing significantly to the chip's adaptability and scalability across domains.
The AI Inference Platform by SEMIFIVE is designed to accelerate artificial intelligence applications with optimized compute capabilities. It supports a variety of frameworks and offers robust integration with existing systems to streamline advanced data processing tasks. This platform is engineered to enhance performance efficiency, offering significant power savings and minimizing latency, thus addressing the demanding needs of AI-driven markets. Additionally, it boasts a modular design to accommodate updates and scalability.
The CTAccel Image Processor (CIP) on Intel Agilex FPGA offers a high-performance image processing solution that shifts workload from CPUs to FPGA technology, significantly enhancing data center efficiency. Using Intel Agilex 7 F-Series FPGAs and SoCs, built on the 10 nm SuperFin process, the CIP can boost image processing speed by 5 to 20 times while reducing latency by a similar factor. This improvement is crucial for accommodating the explosive growth of image data in data centers driven by smartphone proliferation and widespread cloud storage. The Agilex FPGA's advanced features include transceiver rates up to 58 Gbps, versatile DSP blocks supporting both fixed-point and floating-point operations, and high-performance cryptographic capabilities. These features enable substantial gains in image transcoding, thumbnail generation, and image recognition, reducing total cost of ownership by letting data centers maintain higher compute densities at lower operational cost. Moreover, the CIP's support for mainstream image processing software such as ImageMagick and OpenCV ensures seamless integration and deployment. The FPGA's capability for remote reconfiguration allows it to adapt swiftly to custom usage scenarios without server downtime, improving maintenance and operational flexibility.
Kneron's KL530 introduces a modern heterogeneous AI chip design featuring a cutting-edge NPU architecture with support for INT4 precision. The chip stands out for its high computational efficiency and low power usage, making it ideal for a variety of AIoT and other applications. The KL530 pairs the NPU with an Arm Cortex-M4 CPU, delivering powerful image processing and multimedia compression capabilities while maintaining a low power footprint, a good fit for energy-conscious devices.
The ZIA DV700 Series represents a sophisticated AI processing solution optimized for a broad range of data types, such as images and video. The product line combines high inference speed with precision, balancing real-time processing, safety, and privacy requirements for edge-based AI systems. With support for deep neural network (DNN) models, the series enables seamless inference processing, essential for applications requiring device independence and reliable AI operation. The core strength of the ZIA DV700 is its ability to run inference on multiple AI models simultaneously. Its hardware architecture supports a full suite of models, including MobileNet and YOLOv3, ensuring robust object detection and segmentation. It also provides a development environment featuring SDKs and tools compatible with standard AI frameworks, enabling straightforward model integration and processing. A distinctive feature of the DV700 is its use of FP16 precision for floating-point computation, retaining the accuracy established during model training on PC or cloud-based servers. As a result, it is an ideal choice for autonomous driving and robotics systems, where AI inference efficiency directly correlates with system safety and dependability. By enabling AI inference without retraining models, the DV700 series marks a significant step forward in inference efficiency and versatility across varied model structures.
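To see why FP16 inference can preserve accuracy established in float32 training, the stdlib-only sketch below round-trips a weight through IEEE 754 half precision (Python's `struct` format code `'e'`). The sample value is hypothetical; FP16 carries an 11-bit significand, so the relative rounding error stays below about 5e-4.

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a value through 16-bit IEEE 754 half precision."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

weight = 0.73519                    # hypothetical trained weight
w16 = to_fp16(weight)
rel_err = abs(w16 - weight) / abs(weight)
print(f"fp16 value {w16:.5f}, relative error {rel_err:.2e}")
assert rel_err < 1e-3               # FP16 keeps ~3 significant digits
```

This is the storage trade-off the DV700 exploits: roughly three significant decimal digits per weight, which is typically enough to match float32 inference results without retraining.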
Designed for high power efficiency, the KL720 AI SoC achieves a superior performance-per-watt ratio, positioning it as a leader in energy-efficient edge AI solutions. Built for use cases prioritizing processing power and reduced costs, it delivers outstanding capabilities for flagship devices. The KL720 is particularly well-suited for IP cameras, smart TVs, and AI glasses, accommodating high-resolution images and videos along with advanced 3D sensing and language processing tasks.
IMEC's Neuropixels Probe heralds a new era in neural recording, offering unprecedented resolution and sensitivity for neuroscientific explorations. This advanced probe facilitates the mapping of intricate neural networks, providing neuroscientists with a powerful tool to study brain function with extraordinary precision. Each probe is equipped with a dense array of recording sites, capable of capturing electrical activities from a large number of neurons simultaneously, thus unveiling the complexities of neural dynamics previously beyond reach. The Neuropixels Probe integrates cutting-edge technology with streamlined design, optimizing both data quality and user experience. Its architecture supports long-duration recordings with minimal interference, which is crucial for gaining a comprehensive understanding of neural patterns over time. This capability is vital for research areas like cognitive function, neurodegenerative diseases, and behavioral studies, where tracking changes in neural networks provides valuable insights into processes underlying health and disease. By harnessing state-of-the-art fabrication techniques, IMEC ensures that each probe delivers reliability and performance, meeting the diverse requirements of global research institutions. These probes are pivotal for breakthroughs in developing brain-computer interfaces and in advancing our understanding of neurological conditions, setting the stage for new therapies and treatments. Through the Neuropixels Probe, IMEC confirms its position as a leader in advancing technologies that open new vistas for neuroscientific research.
aiSim 5 stands out as the first ISO 26262 ASIL-D certified simulator tool designed for validating ADAS (Advanced Driver-Assistance Systems) and AD (Automated Driving) technologies. The simulator offers an unparalleled environment for testing automated driving systems, using a highly optimized sensor simulation framework that ensures robust performance at runtime. Its advanced rendering engine produces realistic and deterministic environments, bypassing limitations typical of game-engine-based simulators. The tool is pivotal for car manufacturers because it improves the reliability and safety of automated driving solutions. aiSim 5 has a flexible architecture that integrates smoothly with existing toolchains, reducing the need for costly real-road testing. It focuses on multi-sensor simulation, supporting diverse weather conditions and complex driving scenarios essential for developing adaptive driving systems. The environment allows high-mileage virtual testing, vital for understanding and improving the effectiveness of driving systems in various settings. Additionally, aiSim 5 supports the creation of digital-twin 3D environments that accurately replicate real-world locations, enabling high-fidelity simulation of operational design domains, from highways to urban settings. Its ability to simulate adverse scenarios such as snowstorms or heavy rain showcases a comprehensive approach to testing AD systems under every plausible real-world condition.
The Catalyst-GPU series heralds a new era of computing flexibility and power in the PXIe/CPCIe arena with its integration of NVIDIA Quadro T600 and T1000 GPUs. These modules offer outstanding compute acceleration and significant graphics capabilities, crucial for detailed signal processing and AI-driven tasks, making them indispensable for Modular Test & Measurement and Electronic Warfare applications. Boasting a significant performance gain, the Catalyst-GPU sets the stage for seamless, real-time processing abilities across a range of programming environments including MATLAB, Python, and popular AI frameworks. With multi-teraflop capabilities, the Catalyst-GPU ensures that even the most computationally demanding processes are handled with precision, thereby eliminating bottlenecks in data acquisition and computational tasks. Different models within this lineup are tailored to diverse application needs, maintaining ease of programmatic interaction and integration across both Windows and Linux platforms. This adaptability, coupled with a focus on cost-effective solutions, positions the Catalyst-GPU as a leading candidate for industries looking to enhance their AI application infrastructures.
The PB8051 Microcontroller Core, tailored for Xilinx FPGAs, exemplifies Roman-Jones's commitment to providing sophisticated microcontroller solutions. This core is an 8031-compatible implementation from the revered 8051 Microcontroller Family, designed to operate smoothly within the Xilinx ISE flow. It includes vital features such as two timers and a serial port, ensuring comprehensive functionality akin to the traditional 8031 hardware. Remarkably, the PB8051 core allows users to execute standard 8051 object code, making it perfect for applications relying on legacy software. Its compact design, utilizing approximately 300 slices, ensures efficient use of FPGA resources. The core is supported across numerous Xilinx FPGAs, from Spartan II onwards, making it a versatile choice for engineers looking to optimize their digital designs without extensive reprogramming. The PB8051 also includes powerful features for customization, offering configurations accessible through VHDL and Verilog, complete with simulation netlists and reference designs. This flexibility allows engineers to tailor the microcontroller to specific project requirements seamlessly. Furthermore, the core is available through a SignOnce IP license, which facilitates unlimited usage, thus adding a layer of practical value for developing diverse embedded solutions.
The InferX AI leverages a specialized architecture to deliver exceptional AI processing performance tailored to edge devices. This IP empowers devices with the capability to undertake complex machine learning tasks without depending on constant cloud connectivity. Its efficient infrastructure minimizes latency, reduces bandwidth usage, and conserves power, which is pivotal for battery-operated or off-grid applications. Built for scalability, InferX AI ensures seamless performance expansion to accommodate growing computational demands within dynamic AI environments. This versatility makes it attractive for a myriad of AI-driven innovations across different sectors.
The RayCore MC is a state-of-the-art graphics processing unit specifically designed for real-time ray tracing and path tracing. It offers a unique combination of speed and efficiency, catering to the demands of high-quality rendering in modern digital applications. This GPU leverages advanced technology to accelerate the rendering process, allowing for realistic lighting and textures in graphics without compromising performance. Incorporating Siliconarts' proprietary ray tracing algorithms, the RayCore MC excels in delivering lifelike visuals essential for industries such as gaming, virtual reality, and film production. Its low power consumption makes it an ideal choice for devices that require high graphics performance but are constrained by power limitations. Additionally, this GPU is crucial for applications that demand real-time interactive graphics, providing developers with the tools to craft immersive visual experiences. The RayCore MC's modular design supports seamless integration into various hardware architectures, making it versatile for a wide array of products. Its technology is a testament to Siliconarts' commitment to innovation, as it continues to set the standard for GPU performance in cutting-edge digital environments.
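Siliconarts' proprietary algorithms are not public, but the primitive any ray tracer evaluates billions of times per frame is the ray-primitive intersection. As a generic, hedged illustration of what hardware like the RayCore MC accelerates, here is the textbook ray-sphere test (all coordinates hypothetical; the ray direction is assumed normalized).

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance t to the nearest hit along a normalized ray, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * sum(d * o for d, o in zip(direction, (ox, oy, oz)))
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c            # discriminant of |o + t*d - c| = r
    if disc < 0:
        return None                 # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2  # nearer of the two roots
    return t if t > 0 else None

# Ray from the origin along -z toward a unit sphere centered at z = -5
print(ray_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # 4.0
```

A dedicated ray-tracing GPU wins by evaluating this test (and its triangle and bounding-box counterparts) in fixed-function parallel hardware rather than one quadratic at a time.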
As an upgraded version of its predecessor, the Smart Vision Processing Platform - JH7110 enhances image and video processing with a high-performance RISC-V SoC. Featuring a quad-core U74 processor clocked at up to 1.5GHz, the JH7110 significantly increases processing power over the previous generation. Core advancements include a more robust multimedia processing system integrated with a high-performance GPU. Enriched by an array of high-speed interfaces, the JH7110 handles large-scale data management tasks coherently. Designed to reduce power consumption while maximizing output, it remains an excellent option for high-demand computing environments. Its applications extend to cloud computing, industrial control, and intelligent networking, making it a versatile and future-proof solution. Supporting an extensive range of peripherals and communication interfaces, the JH7110 suits numerous applications, from personal computing devices to advanced industrial machinery.
The Tianqiao-90 High-Performance RISC-V CPU Core stands out as a top-tier commercial-grade processor designed to meet the rigorous demands of contemporary high-performance computing environments. Its architecture is crafted for flexibility, making it suitable for data centers, personal computing, mobile devices, and advanced machine learning applications. The CPU core handles intricate computational scenarios well, serving as a cornerstone for high-efficiency device design. Engineered with an advanced pipeline and comprehensive support for standard RISC-V extensions, the Tianqiao-90 provides optimized performance and power efficiency. Its processing capabilities are enhanced by features such as out-of-order execution and multi-issue instruction handling, ensuring robust throughput. This makes it an excellent choice for power-intensive operations while maintaining notable energy efficiency. The core's modular design allows for scalability, adapting to the multicore configurations essential in today's fast-paced technological landscape. Its deployment simplifies SoC development and supports a wide array of applications, from network communications to AI-driven tasks, ensuring versatile implementation across multiple platforms.
The Vega eFPGA is a flexible programmable solution crafted to enhance SoC designs with ease and efficiency. The IP offers multiple advantages, including increased performance, reduced cost, secure IP handling, and ease of integration. The Vega eFPGA has a versatile architecture that can be tailored to varying application requirements, built from configurable tiles such as CLBs (Configurable Logic Blocks), BRAM (Block RAM), and DSP (Digital Signal Processing) units. Each CLB tile includes eight 6-input lookup tables with dual outputs, plus an optional fast adder with a carry chain. The BRAM supports 36Kb dual-port memory in a range of configurations, while the DSP unit targets complex arithmetic with its 18x20 multipliers and a wide 64-bit accumulator. Focused on easy system design and acceleration, the Vega eFPGA supports seamless integration and verification in any SoC design. It is backed by a robust EDA toolset and extensive customization features, and is portable to any semiconductor fabrication process. This flexibility and technological robustness make the Vega eFPGA a standout choice for developing innovative and complex programmable logic solutions.
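As a behavioral sketch of the DSP tile described above, the Python model below performs an 18x20-bit signed multiply feeding a 64-bit accumulator. The signed operand ranges and wrap-around semantics are illustrative assumptions, not the tile's verified behavior.

```python
# Behavioral model (assumed semantics) of an 18x20 multiply-accumulate
# with a 64-bit accumulator, as in the Vega eFPGA DSP tile description.
MASK64 = (1 << 64) - 1

def mac64(acc: int, a: int, b: int) -> int:
    """One MAC step; the accumulator wraps modulo 2**64."""
    assert -(1 << 17) <= a < (1 << 17), "a must fit in 18 signed bits"
    assert -(1 << 19) <= b < (1 << 19), "b must fit in 20 signed bits"
    return (acc + a * b) & MASK64

acc = 0
for a, b in [(1000, 2000), (-300, 400), (131071, 524287)]:
    acc = mac64(acc, a, b)
print(acc)
```

A chain of such tiles is what turns the fabric into a filter or matrix engine: each tile contributes one product per cycle while the wide accumulator absorbs long sums without overflow.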
The Evo Gen 5 PCIe Card is designed to significantly enhance AI inferencing tasks by offloading intensive computations from the CPU. This accelerator card integrates seamlessly with existing infrastructure, providing a boost to generative AI applications, optimizing both performance and operational efficiency. Engineered with cutting-edge innovations, this PCIe card offers scalable solutions suitable for a wide range of AI workloads. Leveraging advanced AI ASIC technology, the Evo card facilitates high-throughput processing, ensuring enterprises can maximize their AI capabilities without extensive system overhauls.
The CTAccel Image Processor tailored for AWS takes advantage of FPGA technology to offer superior image processing capabilities on the cloud platform. Available as an Amazon Machine Image, the CIP for AWS offloads CPU tasks to FPGA, thereby boosting image processing speed by 10 times and reducing computational latency by a similar factor. This performance leap is particularly beneficial for cloud-based applications that demand fast, efficient image processing. By utilizing FPGA's reconfigurable architecture, the CIP for AWS enhances real-time processing tasks such as JPEG thumbnail generation, watermarking, and brightness-contrast adjustments. These functions are crucial in managing the vast image data that cloud services frequently encounter, optimizing both service delivery and resource allocation. The CTAccel solution's seamless integration within the AWS environment allows for immediate deployment and simplification of maintenance tasks. Users can reconfigure the FPGA remotely, enabling a flexible response to varying workloads without disrupting application services. This adaptability, combined with the CIP's high efficiency and low operational cost, makes it a compelling choice for enterprises relying on cloud infrastructure for high-data workloads.
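One of the per-pixel operations the CIP offloads, brightness-contrast adjustment, can be sketched as the classic gain/offset transform out = clamp(alpha * in + beta). This pure-Python version is for illustration only; the parameter values are hypothetical and the FPGA pipeline applies the same math across many pixels per clock.

```python
# Hedged sketch of a brightness/contrast pass: alpha scales contrast,
# beta shifts brightness, and results are clamped to the 8-bit range.
def adjust(pixels, alpha=1.2, beta=10):
    """Apply gain/offset to 8-bit pixel values, clamping to [0, 255]."""
    return [min(255, max(0, round(alpha * p + beta))) for p in pixels]

row = [0, 64, 128, 200, 255]
print(adjust(row))  # [10, 87, 164, 250, 255]
```

Because the transform is independent per pixel, it maps naturally onto FPGA hardware, which is exactly why such filters see order-of-magnitude speedups when moved off the CPU.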
The Smart Vision Processing Platform - JH7100 is a versatile RISC-V SoC designed for complex image and video processing tasks. It is built around a dual-core U74 complex sharing a 2MB L2 cache, operates at speeds up to 1.2GHz, and supports Linux. The platform is purpose-built for real-time edge applications where swift data processing and minimal latency are crucial. The JH7100 integrates a StarFive ISP compatible with mainstream camera sensors, enabling seamless image capture and processing, and provides built-in H.264, H.265, and JPEG encoding for a powerful visual computing experience. Key to its performance are the integrated Vision DSP and NNE components, which provide enhanced AI processing. The platform is ideal for industrial intelligence and smart home devices, offering security and versatility in a compact form. Its low power consumption and high processing efficiency suit it for public safety, industrial automation, and intelligent appliances.