
Direct inference with the llava-v1.5-7B model fails with AttributeError: 'NoneType' object has no attribute 'image_seqlen' #5611

Closed
@Y-PanC

Description

Reminder

  • I have read the README and searched the existing issues.

System Info

Hello! I downloaded the llava-v1.5-7B model and ran inference directly through the API, and it fails with AttributeError: 'NoneType' object has no attribute 'image_seqlen'.
This problem has been bothering me for a week; I would appreciate your guidance.
The details are as follows:
Traceback (most recent call last):
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
return await self.app(scope, receive, send)
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/starlette/applications.py", line 113, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/starlette/middleware/errors.py", line 187, in __call__
raise exc
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/starlette/middleware/errors.py", line 165, in __call__
await self.app(scope, receive, _send)
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/starlette/middleware/cors.py", line 85, in __call__
await self.app(scope, receive, send)
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app
raise exc
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app
await app(scope, receive, sender)
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/starlette/routing.py", line 715, in __call__
await self.middleware_stack(scope, receive, send)
await self.middleware_stack(scope, receive, send)
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/starlette/routing.py", line 735, in app
await route.handle(scope, receive, send)
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app
raise exc
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app
await app(scope, receive, sender)
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/starlette/routing.py", line 73, in app
response = await f(request)
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
return await dependant.call(**values)
File "/home/ubuntu/pqj/math/LLaMA-Factory/src/llamafactory/api/app.py", line 111, in create_chat_completion
return await create_chat_completion_response(request, chat_model)
File "/home/ubuntu/pqj/math/LLaMA-Factory/src/llamafactory/api/chat.py", line 147, in create_chat_completion_response
responses = await chat_model.achat(
File "/home/ubuntu/pqj/math/LLaMA-Factory/src/llamafactory/chat/chat_model.py", line 91, in achat
return await self.engine.chat(messages, system, tools, image, video, **input_kwargs)
File "/home/ubuntu/pqj/math/LLaMA-Factory/src/llamafactory/chat/hf_engine.py", line 292, in chat
return await loop.run_in_executor(pool, self._chat, *input_args)
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/concurrent/futures/thread.py", line 52, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/ubuntu/pqj/math/LLaMA-Factory/src/llamafactory/chat/hf_engine.py", line 189, in _chat
gen_kwargs, prompt_length = HuggingfaceEngine._process_args(
File "/home/ubuntu/pqj/math/LLaMA-Factory/src/llamafactory/chat/hf_engine.py", line 97, in _process_args
messages = template.mm_plugin.process_messages(
File "/home/ubuntu/pqj/math/LLaMA-Factory/src/llamafactory/data/mm_plugin.py", line 237, in process_messages
image_seqlen = getattr(processor, "image_seqlen")
AttributeError: 'NoneType' object has no attribute 'image_seqlen'
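For context, the failing line `getattr(processor, "image_seqlen")` blows up because `processor` itself is `None`, i.e. no multimodal processor was loaded alongside the tokenizer. A minimal sketch of a more defensive version of that lookup (the function name and error message are hypothetical, not LLaMA-Factory's actual code):

```python
def get_image_seqlen(processor):
    """Return processor.image_seqlen, but fail with an actionable message
    when the processor was never loaded (the situation in this traceback)."""
    if processor is None:
        raise ValueError(
            "multimodal processor is None: check that the model directory "
            "contains the LLaVA processor files (e.g. preprocessor_config.json) "
            "and that the chosen template expects images"
        )
    return getattr(processor, "image_seqlen")
```

With a guard like this, the failure surfaces at load/validation time with a hint about the missing processor, instead of a bare `'NoneType' object has no attribute 'image_seqlen'` deep inside request handling.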

Reproduction

The command I use to deploy the API is:
API_PORT=8003 CUDA_VISIBLE_DEVICES=5 llamafactory-cli api /home/ubuntu/pqj/math/LLaMA-Factory/examples/inference/llava-v1.5-7b_api.yaml

The llava-v1.5-7b_api.yaml file contains:
model_name_or_path: /mnt/ssd2/models/llava-v1.5-7b
template: llava

The client .py script is:
from openai import OpenAI

import os
import base64
import json
import re
import shutil
from time import sleep

def encode_image(image_path):
    if image_path.startswith("http"):
        return image_path
    with open(image_path, "rb") as image_file:
        base64_image = base64.b64encode(image_file.read()).decode('utf-8')
    # print(base64_image)
    return f"data:image/jpeg;base64,{base64_image}"

client = OpenAI(api_key="0", base_url="http://0.0.0.0:8003/v1")
content = [{"type": "text", "text": "如图, 在四边形 $A B C D$ 中, 设 $\overrightarrow{A B}=\boldsymbol{a}, \overrightarrow{A D}=\boldsymbol{b}, \overrightarrow{B C}=\boldsymbol{c}$, 则 $\overrightarrow{D C}$ 等于\n\nA. $\boldsymbol{a}-\boldsymbol{b}+\boldsymbol{c}$\nB. $\boldsymbol{b}-(\boldsymbol{a}+\boldsymbol{c})$\nC. $\boldsymbol{a}+\boldsymbol{b}+\boldsymbol{c}$\nD. $\boldsymbol{b}-\boldsymbol{a}+\boldsymbol{c}$"}]

image_path = '/home/ubuntu/pqj/math/data/Test_Images/9017.jpg'
content.append({
    "type": "image_url",
    "image_url": {
        "url": encode_image(image_path)
    }
})

messages = [{"role": "user", "content": content}]

print(messages)

result = client.chat.completions.create(messages=messages, model="/mnt/ssd2/models/llava-v1.5-7b")
print(result.choices[0].message)
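As a side note, the data-URL format that `encode_image` produces can be sanity-checked without a real image file. This sketch re-implements the same encoding for raw bytes (`encode_bytes` is a hypothetical helper, not part of the script above):

```python
import base64

def encode_bytes(data: bytes) -> str:
    # Same "data:image/jpeg;base64,..." scheme as encode_image above,
    # but taking raw bytes so it can be exercised without a file on disk.
    return "data:image/jpeg;base64," + base64.b64encode(data).decode("utf-8")

url = encode_bytes(b"\xff\xd8\xff\xe0")  # JPEG magic bytes
print(url)
```

If the server accepts the request at all, the data URL itself is usually fine; this error occurs later, when the server-side plugin tries to expand the image placeholder.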

Expected behavior

No response

Others

No response


Labels

solved (This problem has been already solved)
