[Bug] Issue with cogvlm2 support #2430

Open
1 of 3 tasks
tdf1995 opened this issue Sep 6, 2024 · 1 comment

tdf1995 commented Sep 6, 2024

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be hard for us to reproduce and resolve it, which reduces the likelihood of your receiving feedback.

Describe the bug

Serving THUDM/cogvlm2-llama3-chinese-chat-19B fails during startup. lmdeploy first falls back to the PyTorch engine because the snapshot under /root/.cache/huggingface/hub/models--THUDM--cogvlm2-llama3-chinese-chat-19B is not supported by the turbomind engine, and then building the vision model fails: matching CogVLMVisionModel raises AttributeError: 'NoneType' object has no attribute '_parameters' inside accelerate's infer_auto_device_map. The full traceback is in the Error traceback section below, followed by a self-contained sketch of the failure mode.

Reproduction

lmdeploy serve api_server THUDM/cogvlm2-llama3-chinese-chat-19B --server-port 23333
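
The crash happens while the pipeline is being constructed, before any request is served, so the same failure should be reproducible without the API server. A minimal sketch, assuming lmdeploy's pipeline entry point exercises the same vision-model build path as serve api_server:

    # Hedged repro sketch (assumption: pipeline() hits the same
    # load_vl_model path that `lmdeploy serve api_server` does).
    from lmdeploy import pipeline

    pipe = pipeline('THUDM/cogvlm2-llama3-chinese-chat-19B')
    # Expected to raise:
    # AttributeError: 'NoneType' object has no attribute '_parameters'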

Environment

sys.platform: linux
Python: 3.10.5 (main, Mar 28 2024, 16:07:02) [GCC 7.5.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1,2,3: Tesla T4
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.8, V11.8.89
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 2.3.0+cu118
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.3.6 (Git Hash 86e6af5974177e513fd3fee58425e1063e7f1361)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 11.8
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_90,code=sm_90
  - CuDNN 8.7
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.8, CUDNN_VERSION=8.7.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.3.0, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,

TorchVision: 0.18.0+cu118
LMDeploy: 0.5.3+
transformers: 4.41.2
gradio: 4.25.0
fastapi: 0.110.1
pydantic: 2.6.4
triton: 2.3.0
NVIDIA Topology:
        GPU0    GPU1    GPU2    GPU3    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      SYS     SYS     SYS     0-19,40-59      0               N/A
GPU1    SYS      X      SYS     SYS     0-19,40-59      0               N/A
GPU2    SYS     SYS      X      SYS     20-39,60-79     1               N/A
GPU3    SYS     SYS     SYS      X      20-39,60-79     1               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

Error traceback

2024-09-06 02:46:34,547 - lmdeploy - WARNING - Fallback to pytorch engine because `/root/.cache/huggingface/hub/models--THUDM--cogvlm2-llama3-chinese-chat-19B/snapshots/9ab2ccb0e7b7db52e6ff60c50f12082af3ce12ad` not supported by turbomind engine.
2024-09-06 02:46:37,662 - lmdeploy - ERROR - matching vision model: CogVLMVisionModel failed
Traceback (most recent call last):
  File "/usr/bin/lmdeploy", line 8, in <module>
    sys.exit(run())
  File "/usr/lib/python3.10/site-packages/lmdeploy/cli/entrypoint.py", line 36, in run
    args.run(args)
  File "/usr/lib/python3.10/site-packages/lmdeploy/cli/serve.py", line 298, in api_server
    run_api_server(args.model_path,
  File "/usr/lib/python3.10/site-packages/lmdeploy/serve/openai/api_server.py", line 1285, in serve
    VariableInterface.async_engine = pipeline_class(
  File "/usr/lib/python3.10/site-packages/lmdeploy/serve/vl_async_engine.py", line 21, in __init__
    self.vl_encoder = ImageEncoder(model_path,
  File "/usr/lib/python3.10/site-packages/lmdeploy/vl/engine.py", line 85, in __init__
    self.model = load_vl_model(model_path, backend_config=backend_config)
  File "/usr/lib/python3.10/site-packages/lmdeploy/vl/model/builder.py", line 55, in load_vl_model
    return module(**kwargs)
  File "/usr/lib/python3.10/site-packages/lmdeploy/vl/model/base.py", line 31, in __init__
    self.build_model()
  File "/usr/lib/python3.10/site-packages/lmdeploy/vl/model/cogvlm.py", line 48, in build_model
    device_map = infer_auto_device_map(
  File "/usr/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 1238, in infer_auto_device_map
    tied_parameters = find_tied_parameters(model)
  File "/usr/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 708, in find_tied_parameters
    all_named_parameters = {name: param for name, param in _get_named_parameters(model, remove_duplicate=False)}
  File "/usr/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 708, in <dictcomp>
    all_named_parameters = {name: param for name, param in _get_named_parameters(model, remove_duplicate=False)}
  File "/usr/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 667, in _get_named_parameters
    members = module._parameters.items()
AttributeError: 'NoneType' object has no attribute '_parameters'
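
The last frame is the key: this AttributeError is what a recursion over a module tree produces when it reaches a child slot holding None (PyTorch keeps a None entry in _modules when a registered submodule is re-assigned to None, rather than deleting the key). Below is a minimal, self-contained sketch of that failure mode, independent of lmdeploy and accelerate; the walk_parameters helper is hypothetical and only mirrors the shape of the recursion in the traceback:

    import torch.nn as nn

    class Demo(nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = nn.Linear(2, 2)
            self.vision = nn.Linear(2, 2)
            # Re-assigning a registered submodule to None keeps a None
            # entry in _modules instead of removing the key.
            self.vision = None

    def walk_parameters(module, prefix=""):
        # Hypothetical helper mirroring the traceback's recursion: it
        # walks _modules without skipping None entries.
        for name, param in module._parameters.items():
            yield f"{prefix}{name}", param
        for child_name, child in module._modules.items():
            # No `if child is None: continue` guard, so the next call
            # dereferences None._parameters and crashes.
            yield from walk_parameters(child, prefix=f"{prefix}{child_name}.")

    try:
        dict(walk_parameters(Demo()))
    except AttributeError as err:
        print(err)  # 'NoneType' object has no attribute '_parameters'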
RunningLeon (Collaborator) commented:

Refer to #2055 (comment).
