Using the Qwen Large Model Locally with Ollama

Installing Ollama and Qwen

Download and install Ollama from the official website (https://ollama.com).
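After installation, you can pull the Qwen model ahead of time so the first API call does not block on a large download; a minimal sketch, assuming the default qwen tag from the Ollama model library:

ollama pull qwen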

Calling via the API

import requests
import json

# Ollama listens on port 11434 by default; /api/chat is the chat endpoint.
url = "http://localhost:11434/api/chat"

payload = {
    "model": "qwen",
    "messages": [
        {"role": "user", "content": "why is the sky blue?"}
    ]
}

# By default /api/chat streams its reply as newline-delimited JSON,
# one chunk per line. requests.post(..., json=...) serializes the
# payload and sets the Content-Type header for us.
response = requests.post(url, json=payload)

# Reassemble the streamed chunks into the full answer.
content_parts = []
for line in response.iter_lines():
    if not line:  # skip blank lines between chunks
        continue
    json_obj = json.loads(line)
    content_parts.append(json_obj["message"]["content"])
full_content = "".join(content_parts)

print(full_content)
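If you don't need the streamed chunks, the same endpoint also accepts a "stream": false flag, which makes Ollama return one complete JSON object instead of one per line; a minimal variant of the call above:

import requests

payload = {
    "model": "qwen",
    "messages": [
        {"role": "user", "content": "why is the sky blue?"}
    ],
    "stream": False  # ask Ollama for a single complete JSON response
}

response = requests.post("http://localhost:11434/api/chat", json=payload)
print(response.json()["message"]["content"])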

Wrapping Ollama as a Background Service

ollama run qwen

Save this line as ollama.bat, then use NSSM to install the bat file as a Windows service so that Ollama stays resident in the background.
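A minimal sketch of the NSSM commands, assuming nssm.exe is on your PATH and the bat file is saved at C:\ollama\ollama.bat (both paths are placeholders for your own locations):

nssm install Ollama C:\ollama\ollama.bat
nssm start Ollama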


Author: spike

Category: Python

Created: 2024-04-06

Updated: 2024-04-06
