
[Feature Request]: Add support for returning prompt and completion token count #2243

Open
nanddeepn opened this issue Dec 21, 2024 · 2 comments
Labels
enhancement New feature or request

Comments

@nanddeepn

Scenario

Hi, we want to track the number of tokens used for the prompt and the completion in an app developed with the Teams AI library.

Solution

Prompt tokens:

The code below lets us get the prompt text:

app.activity(ActivityTypes.Message, async (context: TurnContext, state: ApplicationTurnState) => {
    let message = context.activity.text;
    await context.sendActivity(`You said: ${message}`);
});

Similarly, the context should return the number of tokens used by the input prompt.

Completion tokens:
Is there an option or method available to get the completion text via code?
Similarly, there should be an option to get the completion token count.
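A hypothetical sketch of what the requested surface could look like (the `TokenUsage` interface and `makeUsage` helper below are illustrative only; they are not part of the current teams-ai API):

```typescript
// Illustrative only: a usage shape the library could expose per turn.
// Field names mirror the "usage" object in OpenAI chat completion responses.
interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
}

// Hypothetical helper constructing the usage object from the raw counts.
function makeUsage(promptTokens: number, completionTokens: number): TokenUsage {
  return {
    promptTokens,
    completionTokens,
    totalTokens: promptTokens + completionTokens,
  };
}

console.log(makeUsage(33, 557));
// → { promptTokens: 33, completionTokens: 557, totalTokens: 590 }
```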

Thank you

Additional Context

No response

@nanddeepn nanddeepn added the enhancement New feature or request label Dec 21, 2024
@zedhaque

Hi,

You can count the tokens yourself in Python by importing a tokenizer and running it over the text. However, this isn't the most reliable solution.

from teams.ai.tokenizers import Tokenizer

tokenizer: Tokenizer = ...  # a concrete Tokenizer implementation
tokens = len(tokenizer.encode(message))  # approximate prompt token count

A better way is to use the "usage data" included in the REST API responses from Azure OpenAI/OpenAI. The usage information is already part of the chat completion response.

You can find details here:
https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#chat-completions

Look for this section:

    "usage": {
      "completion_tokens": 557,
      "prompt_tokens": 33,
      "total_tokens": 590
    }

If the teams-ai library exposed this data for you to access, similar to how you access messages, it would be more convenient.
For more information, see these tickets:

#2062
#2119
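Until the library exposes it, the app can accumulate the usage block itself. A minimal sketch (the `UsageTracker` class is illustrative, not part of teams-ai; the response shape matches the reference above):

```typescript
// The shape of the "usage" object in a chat completion response.
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

// Illustrative tracker that sums usage across multiple model calls.
class UsageTracker {
  promptTokens = 0;
  completionTokens = 0;

  record(usage: Usage): void {
    this.promptTokens += usage.prompt_tokens;
    this.completionTokens += usage.completion_tokens;
  }

  get totalTokens(): number {
    return this.promptTokens + this.completionTokens;
  }
}

const tracker = new UsageTracker();
tracker.record({ prompt_tokens: 33, completion_tokens: 557, total_tokens: 590 });
tracker.record({ prompt_tokens: 40, completion_tokens: 10, total_tokens: 50 });
console.log(tracker.totalTokens); // → 640
```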

What language are you planning to use?

@yonggang-xiao

yonggang-xiao commented Dec 26, 2024

In the OpenAIModel.js file:

const completion = await this._client.chat.completions.create(params);

After this call, you can use console.log to inspect the completion.usage data.
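A runnable sketch of that suggestion (the `client` object here is a stub standing in for `this._client`; in OpenAIModel the real client is the OpenAI SDK, whose chat completion response carries the usage data):

```typescript
// Stub client standing in for this._client in OpenAIModel. The real client
// is the OpenAI SDK; its chat.completions.create response includes "usage".
const client = {
  chat: {
    completions: {
      create: async (_params: object) => ({
        choices: [{ message: { role: "assistant", content: "..." } }],
        usage: { prompt_tokens: 33, completion_tokens: 557, total_tokens: 590 },
      }),
    },
  },
};

async function main(): Promise<void> {
  const completion = await client.chat.completions.create({});
  // Log the usage data, as suggested above.
  console.log(completion.usage);
}

main();
```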


3 participants