
Could you provide a Docker image file? Howdown cannot be installed successfully. #7

Open
Yangfan-96 opened this issue Feb 5, 2024 · 5 comments

Comments

@Yangfan-96

No description provided.

@zhuangshaobin
Collaborator

I appreciate your willingness to help, and I apologize for any confusion.

Since I'm not currently in a Docker environment, I can attempt to assist you further if you provide a more detailed description of the issue you're facing.

Please share any error messages or specific challenges you've encountered, and I'll do my best to help you troubleshoot the problem.

@Yangfan-96
Author

Yangfan-96 commented Feb 6, 2024


After replacing bark_ssg with pip install bark, I was able to run the program. Then I found that the name of the encoder on Hugging Face is different from the one in the yaml file (image_encoder_path: "pretrained/CLIP-ViT-H-14-laion2B-s32B-b79K"). After modifying it, the following error was reported:
Seed set to 3407
Cannot initialize model with low cpu memory usage because accelerate was not found in the environment. Defaulting to low_cpu_mem_usage=False. It is strongly recommended to install accelerate for faster and less memory-intense model loading. You can do so with:

pip install accelerate

Warnning: using half percision for inferencing!
model ready!

protagonists ready!
Traceback (most recent call last):
File "/data/yangfan/gen_video/Vlogger/sample_scripts/vlog_read_script_sample.py", line 303, in <module>
main(omega_conf)
File "/data/yangfan/gen_video/Vlogger/sample_scripts/vlog_read_script_sample.py", line 157, in main
video_list = readscript(args.script_file_path)
File "/data/yangfan/gen_video/Vlogger/vlogger/planning_utils/gpt4_utils.py", line 530, in readscript
video_fragments = ast.literal_eval(script)
File "/home/yangfan/anaconda3/envs/vlogger/lib/python3.10/ast.py", line 64, in literal_eval
node_or_string = parse(node_or_string.lstrip(" \t"), mode='eval')
File "/home/yangfan/anaconda3/envs/vlogger/lib/python3.10/ast.py", line 50, in parse
return compile(source, filename, mode, flags,
ValueError: source code string cannot contain null bytes
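For reference, this ValueError is raised by compile() before parsing even begins, whenever the source string contains NUL (\x00) characters. The most common cause is a script file that was saved in a UTF-16 encoding (e.g. by Notepad or a PowerShell redirect) and later read back as single-byte text, so every other byte decodes as \x00. A minimal sketch reproducing and fixing the symptom (the file content here is hypothetical; the real video_prompts.txt format may differ):

```python
import ast

# Hypothetical script content; simulate a file accidentally saved as UTF-16.
raw = '[{"fragment": 1}]'.encode("utf-16")

# Reading those bytes back as single-byte text keeps the interleaved NULs,
# so ast.literal_eval() -> compile() rejects the string outright.
text = raw.decode("latin-1")
try:
    ast.literal_eval(text.lstrip(" \t"))
except ValueError as err:
    print(err)  # source code string cannot contain null bytes

# Decoding with the correct codec consumes the BOM and the NULs.
fragments = ast.literal_eval(raw.decode("utf-16"))
print(fragments)  # [{'fragment': 1}]
```

Re-saving the script file as plain UTF-8 (or decoding it with the codec it was actually written in) makes the NUL bytes disappear.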

@zhuangshaobin
Collaborator

Firstly, it appears that you have not installed the accelerate package. Installing it can save GPU memory and accelerate inference.

Secondly, based on the provided error information, potential issues can be classified into two scenarios:

  1. If you did not use the default configuration file, configs/vlog_read_script_sample.yaml, for sample_scripts/vlog_read_script_sample.py and instead made modifications to the script file, you should compare it with the scripts in the repository I provided. Check for any differences in formatting, as content generated by GPT may occasionally deviate from the interface I designed.

  2. If you used the default configuration file, configs/vlog_read_script_sample.yaml, it's possible that you accidentally modified some scripts within it. In this case, you can directly download and replace the original files with the script files I provided.
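Either way, one quick way to tell the two scenarios apart before running the full pipeline is to validate the script file the same way readscript() does, with ast.literal_eval. This is a hypothetical standalone check (check_script and its command-line usage are not part of the repository):

```python
import ast
import sys


def check_script(path: str) -> None:
    """Fail fast if the script file cannot pass ast.literal_eval()."""
    data = open(path, "rb").read()
    if b"\x00" in data:
        sys.exit(f"{path} contains NUL bytes; re-save it as plain UTF-8")
    try:
        fragments = ast.literal_eval(data.decode("utf-8").lstrip(" \t"))
    except (ValueError, SyntaxError) as err:
        sys.exit(f"{path} is not a valid Python literal: {err}")
    print(f"OK: parsed {len(fragments)} entries")


if __name__ == "__main__":
    check_script(sys.argv[1])
```

If this check fails on an unmodified script file, the file most likely got re-encoded on download, which points back at the encoding rather than the content.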

@Yangfan-96
Author


I did not modify video_prompts.txt. Could you tell me the version number of your ast module?

@zhuangshaobin
Collaborator


My Python interpreter version is 3.10.11; ast is part of the standard library, so its version corresponds to that Python release.
