📦 Aliyun Qwen Multimodal Embedding — Skill Tool

v1.0.0

Use when multimodal embeddings are needed from Alibaba Cloud Model Studio models such as `qwen3-vl-embedding` for image, video, and text retrieval, cr...

by @cinience · MIT-0
License
MIT-0
Last updated
2026/4/1
Security scan

VirusTotal: Harmless
OpenClaw: Suspicious (high confidence)

The skill's code and runtime behavior are simple and offline (it only writes a JSON payload), but the README asks for an API key and pairing with external vector stores that the code never uses — an incoherence worth clarifying before install.

Security is layered; review the code before running.

License

MIT-0

Free to use, modify, and redistribute; no attribution required.

Runtime dependencies

No special dependencies

Versions

latest · v1.0.0 · 2026/4/1

  • Initial release of the aliyun-qwen-multimodal-embedding skill.
  • Supports generation of multimodal embeddings (text, image, video) using Alibaba Cloud Model Studio models for retrieval, search, clustering, or offline vectorization.
  • Provides a normalized embedding.multimodal interface with customizable model, input types, and output dimensions.
  • Includes validation and reproducibility steps, plus guidance for pairing with vector stores.
  • Documents the exact supported model names and selection guidance.


Installation commands

Official: npx clawhub@latest install aliyun-qwen-multimodal-embedding
🇨🇳 Mirror (accelerated): npx clawhub@latest install aliyun-qwen-multimodal-embedding --registry https://cn.longxiaskill.com

Skill documentation

Category: provider

# Model Studio Multimodal Embedding

Validation

mkdir -p output/aliyun-qwen-multimodal-embedding
python -m py_compile skills/ai/search/aliyun-qwen-multimodal-embedding/scripts/prepare_multimodal_embedding_request.py && echo "py_compile_ok" > output/aliyun-qwen-multimodal-embedding/validate.txt

Pass criteria: command exits 0 and output/aliyun-qwen-multimodal-embedding/validate.txt is generated.

Output And Evidence

  • Save normalized request payloads, selected dimensions, and sample input references under output/aliyun-qwen-multimodal-embedding/.
  • Record the exact model, modality mix, and output vector dimension for reproducibility.
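
The evidence bullets above can be sketched as a small helper that writes a reproducibility record next to the request payloads. The file name `evidence.json` and the record fields are illustrative assumptions, not part of the skill's defined outputs:

```python
import json
from pathlib import Path

def record_evidence(out_dir, model, modalities, dimension, sample_inputs):
    """Write a small reproducibility record (model, modality mix, dimension)
    under the skill's output directory."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    record = {
        "model": model,
        "modalities": sorted(modalities),
        "dimension": dimension,
        "sample_inputs": sample_inputs,
    }
    path = out / "evidence.json"  # hypothetical file name
    path.write_text(json.dumps(record, indent=2))
    return path

p = record_evidence(
    "output/aliyun-qwen-multimodal-embedding",
    model="qwen3-vl-embedding",
    modalities=["text", "image"],
    dimension=1024,
    sample_inputs=["https://example.com/cat.jpg"],
)
```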

Use this skill when the task needs text, image, or video embeddings from Model Studio for retrieval or similarity workflows.

Critical model names

Use one of these exact model strings as needed:

  • qwen3-vl-embedding
  • qwen2.5-vl-embedding
  • tongyi-embedding-vision-plus-2026-03-06

Selection guidance:

  • Prefer qwen3-vl-embedding for the newest multimodal embedding path.
  • Use qwen2.5-vl-embedding when you need compatibility with an older deployed pipeline.

Prerequisites

  • Set DASHSCOPE_API_KEY in your environment, or add dashscope_api_key to ~/.alibabacloud/credentials.
  • Pair this skill with a vector store such as DashVector, OpenSearch, or Milvus when building retrieval systems.
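
A minimal sketch of resolving the API key per the prerequisites above: environment variable first, then the credentials file. The INI layout of `~/.alibabacloud/credentials` is an assumption here; only the key name `dashscope_api_key` comes from the README:

```python
import configparser
import os
from pathlib import Path
from typing import Optional

def resolve_dashscope_api_key() -> Optional[str]:
    """Return the DashScope API key from DASHSCOPE_API_KEY, falling back to
    ~/.alibabacloud/credentials (assumed INI format)."""
    key = os.environ.get("DASHSCOPE_API_KEY")
    if key:
        return key
    cred_path = Path.home() / ".alibabacloud" / "credentials"
    if cred_path.exists():
        parser = configparser.ConfigParser()
        parser.read(cred_path)
        for section in parser.sections():
            if parser.has_option(section, "dashscope_api_key"):
                return parser.get(section, "dashscope_api_key")
    return None
```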

Normalized interface (embedding.multimodal)

Request

  • model (string, optional): default qwen3-vl-embedding
  • texts (array, optional)
  • images (array, optional): public URLs or local paths uploaded by your client layer
  • videos (array, optional): public URLs where supported
  • dimension (int, optional): e.g. 2560, 2048, 1536, 1024, 768, 512, 256 for qwen3-vl-embedding

Response

  • embeddings (array)
  • dimension (int)
  • usage (object, optional)
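
The request fields above can be assembled as a plain dictionary before handing off to the script or an HTTP client. This is a sketch of the normalized shape only; the exact JSON the script accepts may differ:

```python
import json

def build_multimodal_request(texts=None, images=None, videos=None,
                             model="qwen3-vl-embedding", dimension=None):
    """Assemble a normalized embedding.multimodal request payload,
    including only the optional fields that were actually provided."""
    payload = {"model": model}
    if texts:
        payload["texts"] = list(texts)
    if images:
        payload["images"] = list(images)
    if videos:
        payload["videos"] = list(videos)
    if dimension is not None:
        payload["dimension"] = int(dimension)
    return payload

request = build_multimodal_request(
    texts=["A cat sitting on a red chair"],
    images=["https://example.com/cat.jpg"],
    dimension=1024,
)
print(json.dumps(request, indent=2))
```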

Quick start

python skills/ai/search/aliyun-qwen-multimodal-embedding/scripts/prepare_multimodal_embedding_request.py \
  --text "A cat sitting on a red chair" \
  --image "https://example.com/cat.jpg" \
  --dimension 1024

Operational guidance

  • Keep input.contents as an array; malformed shapes are a common cause of 400 errors.
  • Pin the output dimension to match your index schema before writing vectors.
  • Use the same model and dimension across a single vector index to avoid mixed-vector incompatibility.
  • For large image or video batches, stage files in object storage and reference stable URLs.
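
The model/dimension consistency point above can be enforced with a small guard before writing vectors. The `index_meta` record shape here is hypothetical; adapt it to whatever metadata your vector store exposes:

```python
def check_index_compatibility(index_meta: dict, model: str, dimension: int) -> None:
    """Fail fast if new vectors would mix models or dimensions in one index.
    `index_meta` is a hypothetical record like {"model": ..., "dimension": ...}."""
    if index_meta["model"] != model:
        raise ValueError(
            f"index built with {index_meta['model']!r}; refusing to write {model!r} vectors"
        )
    if index_meta["dimension"] != dimension:
        raise ValueError(
            f"index expects dimension {index_meta['dimension']}, got {dimension}"
        )

# Matching model and dimension: passes silently.
check_index_compatibility(
    {"model": "qwen3-vl-embedding", "dimension": 1024},
    model="qwen3-vl-embedding",
    dimension=1024,
)
```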

Output location

  • Default output: output/aliyun-qwen-multimodal-embedding/request.json
  • Override the base directory with OUTPUT_DIR.

References

  • references/sources.md

Data source: ClawHub · Chinese localization: 龙虾技能库 (Longxia Skill Library)