The BPI-F3 large-model demo has been included in the GitHub source code since V1.0.8;
you can install and try it as follows:
```shell
sudo apt-get update
sudo apt-get install inferllm-chatglm
```
The model currently used is ChatGLM2-6B; the model size is 3.6 GB, and performance is about 0.7 token/s in Chinese and 1.3 token/s in English.
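To put those throughput figures in perspective, a quick back-of-the-envelope calculation (the 0.7 and 1.3 token/s rates come from the measurements above; the 100-token reply length is just an illustrative assumption):

```python
def generation_time(num_tokens: int, tokens_per_second: float) -> float:
    """Seconds needed to produce num_tokens at a given throughput."""
    return num_tokens / tokens_per_second

# A hypothetical ~100-token Chinese reply at 0.7 token/s:
print(round(generation_time(100, 0.7)))  # 143 seconds
# The same length in English at 1.3 token/s:
print(round(generation_time(100, 1.3)))  # 77 seconds
```

So a typical chat reply takes on the order of one to a few minutes on this CPU-only setup.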
The SpacemiT Key Stone™ K1 can run many mainstream miniaturized large models locally, such as minicpm, qwen2, phi3, llama3, qwen1.5, and chatglm. As the world's first 8-core RISC-V AI CPU, the K1 has no NPU; instead, it innovatively provides 2.0 TOPS of general-purpose AI computing power from the CPU itself.