
Skymizer

Based in Taiwan, Skymizer develops compiler and virtual-machine technologies for AI-powered semiconductors. Its core mission is to advance on-device AI inference, specifically targeting AI-on-chip development environments, and its solutions let design houses automate the creation of AI applications while improving both performance and accuracy.

Skymizer delivers an efficient software infrastructure for AI system design that streamlines the development process. With an emphasis on cross-platform compatibility, its offerings support AI application deployment across varied hardware systems, and its machine-learning optimization techniques ensure that AI models integrate and execute smoothly regardless of platform constraints.

Beyond comprehensive development environments, Skymizer's proprietary compiler tools and runtime solutions help AI-on-chip systems achieve higher processing speeds and better energy efficiency. Its customer-centric approach provides tailored solutions, including reference designs and turn-key options, to meet diverse client needs in a rapidly evolving technological landscape.


4 IPs available

Calibrator for AI-on-Chips

Calibrator for AI-on-Chips is designed to enhance precision and performance in AI System-on-Chips using post-training quantization techniques. By employing architecture-aware algorithms, the calibrator maintains high accuracy even on fixed-point architectures such as INT8, and it supports heterogeneous multicore devices, ensuring compatibility with various processing engines and bit-width configurations.

The product uses a sophisticated precision simulator for accurate quantization across data paths, leveraging hardware-specific controls for precise calibration. The included calibration workflow produces a quantization table that integrates with compilers to fine-tune model precision without altering neural network topologies.

Interoperable with popular frameworks, Calibrator for AI-on-Chips improves performance without requiring retraining. Users benefit from an expedited quantization process that keeps the precision drop minimal, ensuring high-quality outputs even for complex AI models.
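The general shape of such a post-training calibration flow can be sketched in a few lines. Everything below is an illustrative assumption (a simple max-abs scale rule, hypothetical names, made-up sample values), not Skymizer's actual algorithm:

```python
# Illustrative post-training INT8 quantization sketch (hypothetical, not
# Skymizer's algorithm). A calibration pass picks a per-tensor scale from
# sample activations; the resulting "quantization table" maps tensor names
# to scales that a compiler could consume downstream.

def calibrate_scale(samples, qmax=127):
    """Pick a symmetric per-tensor scale from calibration samples (max-abs rule)."""
    peak = max(abs(x) for x in samples)
    return peak / qmax if peak else 1.0

def quantize(x, scale, qmin=-128, qmax=127):
    """Map a float to INT8 using the calibrated scale, clamping to range."""
    q = round(x / scale)
    return max(qmin, min(qmax, q))

def dequantize(q, scale):
    return q * scale

# Calibration produces the quantization table: tensor name -> scale.
activations = [0.03, -1.2, 0.75, 2.54, -0.9]
table = {"conv1.out": calibrate_scale(activations)}

scale = table["conv1.out"]
restored = [dequantize(quantize(v, scale), scale) for v in activations]
max_err = max(abs(a - b) for a, b in zip(activations, restored))
print(f"scale={scale:.5f}, max round-trip error={max_err:.5f}")
```

With a symmetric max-abs scale, the round-trip error per value is bounded by half the scale, which is the "minimal precision drop" property calibration aims for.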

Foundries: HHGrace, Renesas
Process nodes: 65nm, 250nm
Categories: AI Processor, Cryptography Cores, DDR, Processor Core Dependent, Processor Core Independent, Security Protocol Accelerators

ONNC Compiler

The ONNC Compiler is a suite of C++ libraries and tools for building compilers for deep learning accelerators (DLAs). Targeting diverse system-on-chip (SoC) architectures, from single-core systems to complex heterogeneous setups, it transforms neural networks into machine instructions for the various processing elements, allowing seamless integration across SoC designs with different memory and bus configurations.

With support for major deep learning frameworks such as PyTorch and TensorFlow, the ONNC Compiler offers significant flexibility in handling multiple machine-instruction streams concurrently. Both single- and multiple-backend modes are available, covering a broad spectrum of IC designs. The compiler flow is divided into frontend, middle-end, and backend stages, ensuring optimal performance while minimizing memory footprint through strategic data-bandwidth and resource scheduling.

Taking a robust hardware/software co-optimization approach, the ONNC Compiler employs strategies such as software pipelining and DMA allocation. It manages complex memory hierarchies and bus systems to keep data movement efficient and overhead low, yielding substantial RAM savings and higher processing efficacy in AI-centric systems.
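The frontend / middle-end / backend staging described above can be illustrated with a toy pipeline. This is a generic sketch of the staged-compiler pattern, not ONNC's actual C++ API; the operator names, fusion rule, and mnemonics are invented:

```python
# Toy three-stage DLA compiler flow (frontend / middle-end / backend).
# All names and the conv+relu fusion rule are illustrative assumptions.

def frontend(model):
    """Import a framework graph into a simple intermediate representation (IR)."""
    return [{"op": op, "inputs": ins} for op, ins in model]

def middle_end(ir):
    """Hardware-independent optimization: fuse conv+relu pairs to cut memory traffic."""
    out, i = [], 0
    while i < len(ir):
        if (i + 1 < len(ir) and ir[i]["op"] == "conv"
                and ir[i + 1]["op"] == "relu"):
            out.append({"op": "conv_relu", "inputs": ir[i]["inputs"]})
            i += 2
        else:
            out.append(ir[i])
            i += 1
    return out

def backend(ir):
    """Lower the optimized IR to machine-instruction mnemonics for one backend."""
    return [f"DLA.{node['op'].upper()}" for node in ir]

model = [("conv", ["x"]), ("relu", ["t0"]), ("pool", ["t1"])]
instrs = backend(middle_end(frontend(model)))
print(instrs)  # ['DLA.CONV_RELU', 'DLA.POOL']
```

A multiple-backend mode would simply run several `backend`-style lowerings over the same middle-end IR, one per processing element.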

Foundries: GlobalFoundries, TSMC
Process nodes: 10nm, 40nm
Categories: AI Processor, AMBA AHB / APB / AXI, CPU, Processor Core Dependent, Processor Core Independent, Security Protocol Accelerators

Forest Runtime

Forest Runtime is a highly adaptable runtime for executing compiled neural network models across various hardware platforms. Its retargetable, modular architecture supports a wide range of applications, from datacenter workloads to mobile and TinyML deployments, letting users optimize AI model execution for specific hardware capabilities and requirements.

Through its "hot batching" technology, Forest Runtime dynamically adjusts batch sizes and input shapes at run time, improving throughput and minimizing response times without any compiler transformation. This significantly boosts execution speed, especially for modern models such as transformers and BERT, ensuring maximum efficiency in data center environments.

Forest Runtime also excels at scaling, enabling model fusion and efficient resource management. These features minimize CPU and NPU synchronization overhead, maximizing hardware utilization for applications that require multiple processing units and synchronous operation across accelerator cards.
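The idea behind run-time batching of this kind can be shown with a minimal sketch: pending requests are drained into one batch and padded to a shared shape per step, so the batch size varies without recompiling the model. The function name, queue policy, and zero-padding here are illustrative assumptions, not Forest Runtime's API:

```python
# Minimal "hot batching" sketch (illustrative, not Forest Runtime's API):
# drain pending requests into one batch at run time and pad inputs to a
# shared shape, so batch size and input shape vary per step with no
# ahead-of-time compiler transformation.

from collections import deque

def hot_batch(queue, max_batch=8):
    """Drain up to max_batch requests and zero-pad them to the longest sequence."""
    batch = [queue.popleft() for _ in range(min(max_batch, len(queue)))]
    width = max(len(seq) for seq in batch)
    return [seq + [0] * (width - len(seq)) for seq in batch]

pending = deque([[1, 2, 3], [4], [5, 6]])
batch = hot_batch(pending, max_batch=2)
print(batch)  # [[1, 2, 3], [4, 0, 0]] — a batch of 2, padded to length 3
```

A real runtime would also bound how long a request may wait before a partially full batch is dispatched, trading a little latency for throughput.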

Foundries: Intel Foundry, UMC
Process nodes: 16nm, 28nm
Categories: AI Processor, Multiprocessor / DSP, Processor Core Independent

EdgeThought

EdgeThought is a groundbreaking solution designed to revolutionize on-device large language model (LLM) inferencing. It addresses the growing demand for advanced LLM capabilities directly on devices, providing a high-performance, low-cost alternative to traditional cloud-based serving. By maximizing memory-bandwidth utilization and minimizing response times, EdgeThought delivers efficient processing with minimal hardware requirements.

The product supports a diverse range of modern neural networks, including models such as LLaMA2 and Mistral, making it versatile across applications. With a focus on programmability and model flexibility, EdgeThought provides a specialized instruction set designed for LLM tasks, ensuring compatibility with popular frameworks and tools and easing integration into existing AI systems.

EdgeThought's ecosystem readiness further underscores its scalability: it integrates with leading frameworks such as HuggingFace Transformers and NVIDIA Triton Inference Server, and works with fine-tuning and application toolkits such as QLoRA and LangChain. This positions it as an essential tool for AI developers aiming to enhance on-device inferencing capabilities.
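Why memory bandwidth dominates on-device LLM serving is easiest to see in the decode loop itself: the model weights are re-read once per generated token. The sketch below shows only the loop's shape; the lookup-table "model" and all names are stand-ins, not EdgeThought's instruction set or any real LLM:

```python
# Autoregressive decode-loop sketch (illustrative). Each iteration performs
# one "forward pass" to produce one token, which is why per-token weight
# traffic (memory bandwidth) bounds on-device LLM throughput. The tiny
# lookup-table model here is a stand-in for a real network.

NEXT_TOKEN = {"the": "cat", "cat": "sat", "sat": "<eos>"}

def generate(prompt, max_new_tokens=8):
    """Greedy decoding: append one predicted token per step until <eos>."""
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        nxt = NEXT_TOKEN.get(tokens[-1], "<eos>")  # one forward pass per token
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # the cat sat
```

An engine like the one described above optimizes exactly this loop: keeping weight reads streaming at full bandwidth and the per-step control overhead near zero.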

Foundries: GlobalFoundries, UMC
Process nodes: 7nm, 16nm
Categories: AI Processor, Vision Processor