Derek Mitchell
10/10/2024, 5:18 PM
import os
from openai import OpenAI
from flask import Flask, request
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

app = Flask(__name__)

OpenAIInstrumentor().instrument()

client = OpenAI(
    api_key=os.environ.get("MOCK_GPT_API_KEY"),
    base_url="https://mockgpt.wiremockapi.cloud/v1"
)

@app.route("/askquestion", methods=['POST'])
def ask_question():
    data = request.json
    question = data.get('question')
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": question}
        ]
    )
    return completion.choices[0].message.content
It used to work fine, but now I encounter the following error:
Failed to encode key gen_ai.response.model: Invalid type <class 'NoneType'> of value None
Traceback (most recent call last):
File "/Users/derekmitchell/Temp/openai-test/openai-env/lib/python3.9/site-packages/opentelemetry/exporter/otlp/proto/common/_internal/__init__.py", line 111, in _encode_attributes
pb2_attributes.append(_encode_key_value(key, value))
File "/Users/derekmitchell/Temp/openai-test/openai-env/lib/python3.9/site-packages/opentelemetry/exporter/otlp/proto/common/_internal/__init__.py", line 92, in _encode_key_value
return PB2KeyValue(key=key, value=_encode_value(value))
File "/Users/derekmitchell/Temp/openai-test/openai-env/lib/python3.9/site-packages/opentelemetry/exporter/otlp/proto/common/_internal/__init__.py", line 88, in _encode_value
raise Exception(f"Invalid type {type(value)} of value {value}")
Exception: Invalid type <class 'NoneType'> of value None
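Tracing through, the exception comes from the encoder's strict type check. Here is a stripped-down mimic of that dispatch (not the actual OTLP source, just an illustration of the behavior):

```python
def encode_value(value):
    # Simplified stand-in for the OTLP encoder's _encode_value: only
    # primitives and sequences of primitives are encodable; anything
    # else, including None, raises.
    if isinstance(value, (bool, str, int, float, bytes)):
        return value
    if isinstance(value, (list, tuple)):
        return [encode_value(v) for v in value]
    raise Exception(f"Invalid type {type(value)} of value {value}")
```

So any span attribute whose value is None is enough to make the exporter blow up.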
The problem is that MockGPT is returning a model of "None" in the response, even though I set the model to "gpt-3.5-turbo" in the request, which is causing issues with OpenTelemetry instrumentation:
ChatCompletion(id='chatcmpl-123', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Hello!\n\nThis is the default MockGPT response.\n\nCreate your own version in WireMock Cloud to fully customise this mock API.\n\n', refusal=None, role='assistant', function_call=None, tool_calls=None))], created=1728579960, model='gpt-3.5-turbo', object='chat.completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=12, prompt_tokens=9, total_tokens=21))
127.0.0.1 - - [10/Oct/2024 10:06:00] "POST /askquestion HTTP/1.1" 200 -
In comparison, the same request sent to OpenAI's API returns a model:
ChatCompletion(id='chatcmpl-AGqfIFIyJzEqISoGz46VEsyfYLrLe', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Hello! How can I assist you today?', role='assistant', function_call=None, tool_calls=None, refusal=None))], created=1728579056, model='gpt-3.5-turbo-0125', object='chat.completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=9, prompt_tokens=10, total_tokens=19, prompt_tokens_details={'cached_tokens': 0}, completion_tokens_details={'reasoning_tokens': 0}))
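In the meantime, I can avoid the crash by stripping None-valued attributes before they reach the encoder. A minimal sketch of the filtering step (plain Python; the attribute names and values below are illustrative, not captured data):

```python
def drop_none_attributes(attributes):
    """Return a copy of a span's attribute mapping with None values removed,
    since the OTLP encoder raises on NoneType values."""
    return {key: value for key, value in attributes.items() if value is not None}

# Illustrative attribute set for the failing span: when the response model
# is missing, the instrumentation records None for it.
span_attributes = {
    "gen_ai.request.model": "gpt-3.5-turbo",
    "gen_ai.response.model": None,  # this is what breaks the encoder
}
print(drop_none_attributes(span_attributes))
```

But that only papers over the missing field, so I'd rather have the mock return a model again.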
Could you please confirm whether this behavior has changed, and if so, whether it is possible to return the model in the response again, as the OpenAI API does?
Tom
10/11/2024, 9:08 AM
Derek Mitchell
10/11/2024, 2:01 PM
openai 1.43.0
opentelemetry-instrumentation-openai 0.28.2
Tom
10/14/2024, 3:43 PM