
Server deployment question #37

Open
airsxue opened this issue May 17, 2024 · 2 comments

Comments

@airsxue

airsxue commented May 17, 2024

What environment do I need to set up to run V2 on Linux with this demo? Thanks.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name = "deepseek-ai/DeepSeek-V2-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# `max_memory` should be set based on your devices
max_memory = {i: "75GB" for i in range(8)}

# `device_map` cannot be set to "auto"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    device_map="sequential",
    torch_dtype=torch.bfloat16,
    max_memory=max_memory,
    attn_implementation="eager",
)
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id

messages = [
    {"role": "user", "content": "Write a piece of quicksort code in C++"}
]
input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_tensor.to(model.device), max_new_tokens=100)

result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True)
print(result)
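For reference, if your GPUs differ from the 8-device setup hard-coded above, the max_memory map can be derived from the visible devices instead. This is only an illustrative sketch, not part of the original demo; make_max_memory and the 5 GB headroom are assumptions:

import torch

def make_max_memory(headroom_gb: int = 5) -> dict:
    # Hypothetical helper: build a per-GPU memory budget from the visible
    # devices, leaving some headroom for activations and KV cache.
    mapping = {}
    for i in range(torch.cuda.device_count()):
        total_gb = torch.cuda.get_device_properties(i).total_memory // (1024 ** 3)
        mapping[i] = f"{max(total_gb - headroom_gb, 1)}GB"
    return mapping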

@haichuan1221


Same question here.

@stack-heap-overflow
Contributor

The Python libraries you need to install are: torch, transformers, and accelerate.

I haven't tested compatibility across different versions of these libraries in detail, but here is the environment I used for testing, as a reference; you don't need to match it exactly:

  • torch == 2.1.0
  • transformers == 4.39.3
  • accelerate == 0.29.3
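If it helps, here is a minimal sketch (assuming Python 3.8+, where importlib.metadata is in the standard library) to check whether your installed versions match this reference environment:

from importlib.metadata import version, PackageNotFoundError

# Reference versions from the comment above; other versions may still work.
expected = {"torch": "2.1.0", "transformers": "4.39.3", "accelerate": "0.29.3"}

for pkg, want in expected.items():
    try:
        have = version(pkg)
    except PackageNotFoundError:
        have = "not installed"
    status = "matches" if have == want else "differs (may still work)"
    print(f"{pkg}: installed {have}, reference {want} -> {status}")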
