[Request] Initiate DeepSeek-R1's response with "<think>\n" at the beginning of every output #6152
hcygnaw started this conversation in LLM Usage | 语言模型研究
Replies: 3 comments · 3 replies
-
👀 @hcygnaw Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
-
Adding a prompt yourself should do the trick. I threw one together off the top of my head that you can try.
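The prompt itself isn't preserved in this copy of the thread; purely as an illustration, a prompt of that sort, sent as a system message, might look something like the sketch below (the wording here is made up, not the commenter's original):

```python
# Hypothetical illustration only -- not the commenter's original prompt.
# A system message that nudges (but cannot guarantee) the <think> block.
# Note: DeepSeek's R1 usage notes suggest folding instructions into the
# user turn instead of a system prompt, so this may need adjusting.
messages = [
    {
        "role": "system",
        "content": (
            "Before answering, reason step by step inside a <think>...</think> "
            "block, and always begin your reply with <think>."
        ),
    },
    {"role": "user", "content": "9.11 and 9.8, which is greater?"},
]
```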
-
Nice! To some extent that does get R1 to start thinking, but it isn't enforced; it still frequently skips the thinking and replies directly. With a locally deployed LM Studio you can edit the model's reply and then let it continue generating, which forces it to think. I don't know whether the current DeepSeek API can force the response to start with a given piece of text.
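For reference, DeepSeek's API documents a beta "chat prefix completion" feature that appears aimed at exactly this; the sketch below assumes that beta endpoint and that it accepts the reasoner model, so treat it as untested:

```python
# Sketch only: assumes DeepSeek's beta chat-prefix-completion endpoint
# (base_url https://api.deepseek.com/beta) and that it works with this model.
from openai import OpenAI

client = OpenAI(api_key="sk-...", base_url="https://api.deepseek.com/beta")

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[
        {"role": "user", "content": "9.11 and 9.8, which is greater?"},
        # The assistant turn is pre-filled with the opening tag and marked as
        # a prefix, so generation continues from inside the <think> block.
        {"role": "assistant", "content": "<think>\n", "prefix": True},
    ],
)
print(response.choices[0].message.content)
```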
-
🥰 Feature Description
When using the DeepSeek-R1 series of models, responses to certain queries often skip the thinking-mode output (i.e., the "<think>\n\n</think>" block is missing). This can have some negative impact on the model's performance. Following the official guidance, to make sure the model reasons fully before answering, I would like to force the model to begin every output with "<think>\n". Is there a way to implement this in LobeChat?
🧐 Proposed Solution
Force DeepSeek-R1 to begin every output with "<think>\n".
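One possible shape for this, if the backend is an OpenAI-compatible server that supports continuing a pre-filled assistant message (vLLM, for example, exposes add_generation_prompt and continue_final_message as extra chat parameters), is sketched below; the field names belong to the server, not to LobeChat, and whether LobeChat can pass them through is exactly the open question here:

```python
# Sketch for a self-hosted, OpenAI-compatible R1 deployment (e.g. vLLM).
# Assumes the server supports the extra chat parameters in extra_body;
# adjust or drop them for other backends.
from openai import OpenAI

client = OpenAI(api_key="EMPTY", base_url="http://localhost:8000/v1")

response = client.chat.completions.create(
    model="deepseek-r1",  # placeholder served-model name
    messages=[
        {"role": "user", "content": "9.11 and 9.8, which is greater?"},
        # Pre-fill the assistant turn with the opening tag...
        {"role": "assistant", "content": "<think>\n"},
    ],
    extra_body={
        # ...and ask the server to continue that message instead of
        # starting a fresh assistant turn.
        "add_generation_prompt": False,
        "continue_final_message": True,
    },
)
print(response.choices[0].message.content)
```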
📝 Additional Information
No response