💄 style: add o1-preview and o1-mini model to github model provider #4127

Open
wants to merge 2 commits into base: main
20 changes: 20 additions & 0 deletions src/config/modelProviders/github.ts
@@ -139,6 +139,26 @@ const Github: ModelProviderCard = {
tokens: 128_000,
vision: true,
},
{
description: 'Focused on advanced reasoning and solving complex problems, including math and science tasks. Ideal for applications that require deep contextual understanding and agentic workflows.',
displayName: 'OpenAI o1-preview',
enabled: true,
functionCall: false,
id: 'o1-preview',
maxOutput: 32_768,
tokens: 128_000,
    vision: false, // o1-preview does not accept image input
},
{
description: 'Smaller, faster, and 80% cheaper than o1-preview, performs well at code generation and small context operations.',
displayName: 'OpenAI o1-mini',
enabled: true,
functionCall: false,
id: 'o1-mini',
maxOutput: 65_536,
tokens: 128_000,
    vision: false, // o1-mini does not accept image input
},
{
description:
'Same Phi-3-medium model, but with a larger context size for RAG or few shot prompting.',
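For context, each entry added in the diff above follows the provider's model card shape. A minimal standalone sketch of the fields involved (the interface here is assumed and simplified; the real `ModelProviderCard` type lives in the repository):

```typescript
// Assumed, simplified shape of one chat-model entry, inferred from the diff.
interface ChatModelCard {
  description: string;
  displayName: string;
  enabled: boolean; // shown by default in the model picker
  functionCall: boolean; // o1 models do not support tool/function calling
  id: string; // model identifier sent to the API
  maxOutput: number; // maximum completion tokens
  tokens: number; // total context window size
  vision: boolean; // whether image input is supported
}

const o1Mini: ChatModelCard = {
  description:
    'Smaller, faster, and 80% cheaper than o1-preview, performs well at code generation and small context operations.',
  displayName: 'OpenAI o1-mini',
  enabled: true,
  functionCall: false,
  id: 'o1-mini',
  maxOutput: 65_536,
  tokens: 128_000,
  vision: false, // o1 models do not accept image input
};

console.log(o1Mini.id); // 'o1-mini'
```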
33 changes: 32 additions & 1 deletion src/libs/agent-runtime/github/index.ts
@@ -1,9 +1,40 @@
import { AgentRuntimeErrorType } from '../error';
import { ModelProvider } from '../types';
import { ChatStreamPayload, ModelProvider, OpenAIChatMessage } from '../types';
import { LobeOpenAICompatibleFactory } from '../utils/openaiCompatibleFactory';

// TODO: temporary implementation; to be refactored into a model card display config later
export const o1Models = new Set([
'o1-preview',
'o1-mini',
]);

export const pruneO1Payload = (payload: ChatStreamPayload) => ({
...payload,
frequency_penalty: 0,
messages: payload.messages.map((message: OpenAIChatMessage) => ({
...message,
role: message.role === 'system' ? 'user' : message.role,
})),
presence_penalty: 0,
stream: false,
temperature: 1,
top_p: 1,
});


export const LobeGithubAI = LobeOpenAICompatibleFactory({
baseURL: 'https://models.inference.ai.azure.com',
chatCompletion: {
handlePayload: (payload) => {
const { model } = payload;

if (o1Models.has(model)) {
return pruneO1Payload(payload) as any;
}

return { ...payload, stream: payload.stream ?? true };
},
},
debug: {
chatCompletion: () => process.env.DEBUG_GITHUB_CHAT_COMPLETION === '1',
},
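The pruning in `pruneO1Payload` reflects the o1 API's constraints: no system role, no streaming, and fixed sampling parameters. A minimal standalone illustration of that transformation (the types here are simplified stand-ins, not the library's real definitions):

```typescript
// Simplified stand-in types, assumed for this sketch.
interface OpenAIChatMessage {
  role: string;
  content: string;
}

interface ChatStreamPayload {
  model: string;
  messages: OpenAIChatMessage[];
  stream?: boolean;
  temperature?: number;
}

// Mirrors the PR's pruning logic: remap system -> user, disable
// streaming, and pin the sampling parameters o1 requires.
const pruneO1Payload = (payload: ChatStreamPayload) => ({
  ...payload,
  frequency_penalty: 0,
  messages: payload.messages.map((message) => ({
    ...message,
    role: message.role === 'system' ? 'user' : message.role,
  })),
  presence_penalty: 0,
  stream: false,
  temperature: 1,
  top_p: 1,
});

const pruned = pruneO1Payload({
  model: 'o1-mini',
  messages: [
    { role: 'system', content: 'You are helpful.' },
    { role: 'user', content: 'Hi' },
  ],
  stream: true,
  temperature: 0.2,
});

console.log(pruned.messages[0].role); // 'user'
console.log(pruned.stream); // false
console.log(pruned.temperature); // 1
```

Note that the overrides come after the spread, so whatever `stream` or `temperature` the caller passed is always replaced.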