
MemryX Accl

MemryX SDK C++

The MxAccl repository provides the open-source code for the mx_accl runtime library, the mxa-manager daemon, and the acclBench benchmarking tool. These components enable seamless integration and performance measurement of C++ applications using the MemryX MX3 accelerator.

MxAccl Library

The mx_accl library is designed to efficiently handle multi-model and multi-input streams. It offers:

  • Auto-threading Mode: Where the library manages send and receive threads automatically, simplifying usage.
  • Manual-threading Mode: Where the user manually creates and manages threads, providing greater control and customization.

This architecture helps maximize the performance of the MX3’s pipelined dataflow system, while also allowing users to manage resources and threading as needed.
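As a rough illustration, the two modes might be used along the following lines. This is a pseudocode-style sketch, not a verbatim example: the class and method names (MxAccl, MxAcclMT, connect_dfp, connect_stream, send_input, receive_output) are assumptions loosely based on Developer Hub examples and may differ from the current API; consult the full documentation for exact signatures.

```cpp
// --- Auto-threading mode (pseudocode sketch): the library owns the threads ---
MX::Runtime::MxAccl accl;                 // assumed class name
accl.connect_dfp("model.dfp");            // load the compiled DFP model
// Register per-stream input/output callbacks; the library spawns the
// send and receive threads internally.
accl.connect_stream(&input_callback, &output_callback,
                    /*stream_id=*/0, /*model_id=*/0);
accl.start();                             // run until input callback signals end of stream
accl.wait();
accl.stop();

// --- Manual-threading mode (pseudocode sketch): the user owns the threads ---
MX::Runtime::MxAcclMT accl_mt;            // assumed manual-threading class
accl_mt.connect_dfp("model.dfp");
// In a user-created sender thread:
//     accl_mt.send_input(input_buffers, /*model_id=*/0, /*stream_id=*/0);
// In a user-created receiver thread:
//     accl_mt.receive_output(output_buffers, /*model_id=*/0, /*stream_id=*/0);
```

In practice, auto-threading suits most single-application pipelines, while manual threading lets an application integrate accelerator I/O into its own existing thread pool or scheduling model.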

AcclBench Tool

The acclBench command line interface tool provides an easy way to benchmark model performance on the MX3 accelerator. It measures the latency and FPS of inference operations, capturing the performance from the host side, which includes driver and interface time. acclBench supports both single and multi-stream scenarios and is built using the high-performance MxAccl C++ API.

Full Documentation

For detailed information on using the mx_accl library and acclBench tool, please visit the MemryX Developer Hub.

Note: If you are looking for the Python API for the MemryX MX3 accelerator, please visit the Developer Hub for documentation and usage examples. This repository only contains the C++ version of the library.

Repository Overview

This repository contains the source code for the core mx_accl library and associated tools. These components are typically pre-built and packaged as memx-accl within the MemryX SDK.

Folder            Description
mx_accl           Core MxAccl runtime library code
mx_accl/pymodule  Python bindings for the MxAccl library
mxa_manager       MXA-Manager daemon and config files
tools             Utilities like acclBench
misc              Copies of the binary libmemx.so library, used for building Yocto packages separately

IMPORTANT: For most users, we highly recommend using the prebuilt memx-accl package provided by the MemryX SDK, as it simplifies development and ensures all dependencies are properly managed.

Recommended Installation: MemryX SDK

To simplify development and avoid building from source, install the memx-accl package through the MemryX SDK. The SDK includes precompiled libraries, drivers, and tools optimized for MemryX accelerators. Follow the installation guide for step-by-step instructions.

The MemryX Developer Hub provides comprehensive documentation, tutorials, and examples to get you started quickly.

The Runtime section of the DevHub also provides deeper details about the MxAccl library.

Advanced Installation: Building from Source

For advanced users who prefer to build the MxAccl library from source, follow these instructions:

Step 0: Install memx-drivers

If you haven't already, install the MemryX drivers and runtime libraries by following the instructions in the MemryX SDK Installation Guide.

Step 1: Clone the repository

git clone https://github.com/memryx/MxAccl.git

Step 2: Build MxAccl

mkdir build && cd build
cmake ..
make -j$(nproc)

(Optional) Step 3: Build Python Bindings

# activate your Python venv with numpy, etc. installed
cd ../mx_accl/pymodule
mkdir build && cd build
cmake ..
make -j$(nproc)

Usage

MxAccl

The typical use of MxAccl is to integrate it directly into your C++ application to manage model inference on the MemryX MX3 accelerator. For complete documentation and detailed integration tutorials, visit the MxAccl Documentation and Tutorials Page.

mxa_manager

The mxa_manager daemon is responsible for managing the MX3 accelerator and its resources. See its documentation here.

acclBench

To measure the performance (latency and FPS) of a model on the MemryX MX3 accelerator, use the acclBench command line tool. Here's how to get started:

Step 1: Obtain a DFP Model File

You can download a precompiled MobileNet DFP file from the MemryX Developer Hub.

For more information on creating and using DFP files, refer to the Hello MXA Tutorial.

Step 2: Run acclBench

Once you have the DFP file and have successfully installed the MemryX SDK, drivers, and runtime libraries, navigate to the directory where you downloaded the MobileNet DFP file and run:

acclBench -d mobilenet.dfp -f 1000

Explanation of the Command
  • -d mobilenet.dfp: Specifies the DFP model file to be used for benchmarking.
  • -f 1000: Sets the number of frames for testing inference performance. The default is 1000 frames.

Pre/Post Plugins

For ONNX, TensorFlow, and TFLite pre/post plugins, refer to the MxUtils repository. These plugins are packaged separately to minimize dependencies for the core MxAccl library and are only required if pre/post models are used.

License

MxAccl is free and open-source software under the MPL-2.0 license.

Third-Party

See the LICENSE.md file within each extern/ subdirectory for the asio, cpuinfo, pybind11, and spdlog licenses; all are permissive open-source licenses.

See Also

Enhance your experience with MemryX solutions by exploring the following resources:

  • Developer Hub: Access comprehensive documentation for MemryX hardware and software.
  • MemryX SDK Installation Guide: Learn how to set up essential tools and drivers to start using MemryX accelerators.
  • Tutorials: Follow detailed, step-by-step instructions for various use cases and applications.
  • Model Explorer: Discover and explore models that have been compiled and optimized for MemryX accelerators.
  • Examples: Explore a collection of end-to-end AI applications powered by MemryX hardware and software.
