sakura-ai-engine
0.0.4

Sakura AI Engine model provider for Dify via the OpenAI-compatible API.

tplog/sakura-ai-engine · 201 installs

Sakura AI Engine – Dify Plugin


A Dify model provider plugin that connects Sakura AI Engine to Dify via the OpenAI-compatible API.

Features

  • LLM & Embedding — supports both large language models and text embedding models
  • Predefined models — ready to use out of the box
  • Custom models — add any model available on Sakura AI Engine
  • Streaming — full streaming support, including handling of reasoning content emitted by reasoning-capable models
  • OpenAI-compatible — standard API format, no extra adapters needed
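The streaming bullet above can be sketched as follows. This is a minimal illustration, not the plugin's actual code; the delta field names (`reasoning_content`, `content`) are assumptions based on common OpenAI-compatible conventions for reasoning models and may differ per model:

```python
# Hedged sketch: separating "reasoning" deltas from answer deltas in an
# OpenAI-compatible streaming response. The chunk dicts below are mocked;
# a real stream would come from the chat completions endpoint.

def split_stream(chunks):
    """Accumulate reasoning text and answer text from streamed delta dicts."""
    reasoning, answer = [], []
    for delta in chunks:
        if delta.get("reasoning_content"):   # assumed field name
            reasoning.append(delta["reasoning_content"])
        if delta.get("content"):
            answer.append(delta["content"])
    return "".join(reasoning), "".join(answer)

# Mocked stream chunks, in arrival order:
chunks = [
    {"reasoning_content": "Think step 1. "},
    {"reasoning_content": "Think step 2."},
    {"content": "Final "},
    {"content": "answer."},
]
r, a = split_stream(chunks)
print(r)  # Think step 1. Think step 2.
print(a)  # Final answer.
```

In Dify, this separation is what lets the reasoning trace render apart from the final answer.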

Supported Models

LLM

| Model | Context |
| --- | --- |
| gpt-oss-120b | 128K |
| Qwen3-Coder-480B-A35B-Instruct-FP8 | 128K |
| Qwen3-Coder-30B-A3B-Instruct | 128K |
| llm-jp-3.1-8x13b-instruct4 | 128K |

Preview models

| Model | Context |
| --- | --- |
| Phi-4-mini-instruct (CPU) | 16K |
| Phi-4-multimodal-instruct | 128K |
| Qwen3-0.6B (CPU) | 32K |
| Qwen3-VL-30B-A3B-Instruct | 128K |

Text Embedding

| Model | Dimensions | Context |
| --- | --- | --- |
| multilingual-e5-large | 1024 | 512 |
| Qwen3-Embedding-4B-FP16 (Preview) | 2560 | 8192 |
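Since the provider is OpenAI-compatible, an embedding call follows the standard `/embeddings` request shape. Below is a minimal sketch of constructing such a request; the base URL is a placeholder assumption (use the endpoint from your Sakura AI Engine account), and `build_embedding_request` is a hypothetical helper, not part of the plugin:

```python
# Hedged sketch: building a standard OpenAI-compatible embeddings request.
# BASE_URL is a placeholder, not the real Sakura AI Engine endpoint.
# The plugin performs the equivalent call internally on your behalf.
BASE_URL = "https://example.invalid/v1"  # assumption

def build_embedding_request(api_key: str, model: str, texts: list[str]):
    """Return (url, headers, payload) for an OpenAI-style embeddings call."""
    return (
        f"{BASE_URL}/embeddings",
        {"Authorization": f"Bearer {api_key}",
         "Content-Type": "application/json"},
        {"model": model, "input": texts},
    )

url, headers, payload = build_embedding_request(
    "YOUR_API_KEY", "multilingual-e5-large", ["hello world"]
)
# A successful response would contain one vector per input string --
# 1024-dimensional for multilingual-e5-large, per the table above.
```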

Quick Start

1. Install

Download the plugin package from the Releases page and install it in your Dify instance.

2. Configure

  1. Go to Settings → Model Providers in Dify
  2. Find Sakura AI Engine and click configure
  3. Enter your Sakura AI Engine API key

3. Use

Select any predefined model in your Dify app, or add a custom model by entering its model ID.

Configuration

Entering your API key is all that is required; the plugin handles everything else automatically.

Project Structure

Requirements

  • Dify ≥ 1.11.1
  • Python 3.12
  • Sakura AI Engine API key

Security

  • Never commit your API key to the repository
  • Store credentials only in Dify's credential fields
  • Rotate the key immediately if exposed

License

MIT

CATEGORY: Model
VERSION: 0.0.4 (tplog, 03/24/2026 02:09 AM)
REQUIREMENTS: LLM invocation
Maximum memory: 256MB