Banana Pi BPI-F3 for AI

The K1 chip launched by SpacemiT builds on a general-purpose CPU, combining a small amount of DSA customization (following the RISC-V IME extension framework) with extensive micro-architecture innovation. By reusing the results of the open-source ecosystem as much as possible while remaining compatible with it, the CPU delivers TOPS-level AI compute to accelerate edge AI applications. This means the K1 avoids low-quality, repetitive development and takes full advantage of the richness and flexibility of open-source resources for rapid deployment with little investment.

Based on SpacemiT's AI technology route, the K1 chip can deploy a large number of open-source models in a short time through an open software stack using lightweight plug-ins. Optimized deployment has so far been verified for around 150 models covering image classification, image segmentation, object detection, speech recognition, natural language understanding, and other scenarios. Open-source model repositories such as timm, the onnx modelzoo, and the ppl modelzoo pass at close to 100%, and in principle every public ONNX model can be supported.

The SpacemiT ONNX Runtime plug-in can be used as follows:

○ C/C++


#include <onnxruntime_cxx_api.h>
#include "spacemit_ort_env.h"
std::string net_param_path = "your_onnx_model.onnx";
Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "ort-demo");
Ort::SessionOptions session_options;
// Optional: initialize the SpacemiT environment and register the dedicated EP (execution provider)
Ort::SessionOptionsSpaceMITEnvInit(session_options);
Ort::Session session(env, net_param_path.c_str(), session_options);
// Load the inputs
// .......
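// A minimal sketch of the input-loading step left elided above (assumptions: a single
// float input named "data" of shape 1x3x224x224 and one output named "output", mirroring
// the Python example below; the real names and shapes come from your model).
auto memory_info = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
std::vector<float> input_data(1 * 3 * 224 * 224, 1.0f);  // dummy all-ones input
std::vector<int64_t> input_shape{1, 3, 224, 224};
Ort::Value input_tensor = Ort::Value::CreateTensor<float>(
    memory_info, input_data.data(), input_data.size(),
    input_shape.data(), input_shape.size());
std::vector<const char*> input_node_names{"data"};
std::vector<const char*> output_node_names{"output"};
size_t input_count = 1, output_count = 1;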
auto output_tensors = session.Run(Ort::RunOptions{nullptr}, input_node_names.data(), &input_tensor, input_count,
                                    output_node_names.data(), output_count);

○ Python

import onnxruntime as ort
import numpy as np
import spacemit_ort  # importing the plug-in registers the SpaceMITExecutionProvider

net_param_path = "resnet18.q.onnx"  # quantized ResNet-18 model
# Create a session that runs on the SpacemiT execution provider
session = ort.InferenceSession(net_param_path, providers=["SpaceMITExecutionProvider"])
# Dummy all-ones input matching the model's 1x3x224x224 "data" input
input_tensor = np.ones((1, 3, 224, 224), dtype=np.float32)
outputs = session.run(None, {"data": input_tensor})
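
To sanity-check a deployment, the same model can also be run on ONNX Runtime's built-in CPU execution provider and the results compared. This is a minimal sketch, reusing the model and the "data" input name from the example above; the loose tolerance is an assumption, since quantized kernels may differ slightly between providers:

import onnxruntime as ort
import numpy as np
import spacemit_ort

net_param_path = "resnet18.q.onnx"
input_tensor = np.ones((1, 3, 224, 224), dtype=np.float32)

# Run the same model on the SpacemiT EP and on the default CPU EP
spacemit_session = ort.InferenceSession(net_param_path, providers=["SpaceMITExecutionProvider"])
cpu_session = ort.InferenceSession(net_param_path, providers=["CPUExecutionProvider"])
spacemit_out = spacemit_session.run(None, {"data": input_tensor})[0]
cpu_out = cpu_session.run(None, {"data": input_tensor})[0]

print(spacemit_session.get_providers())  # confirm the EP actually loaded
print("max abs diff:", np.abs(spacemit_out - cpu_out).max())
print("match:", np.allclose(spacemit_out, cpu_out, atol=1e-2))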