Today I experimented with hooking up a REST API to OpenAI via the Chat Completions “tools” framework (originally I had intended to use a Model Context Protocol server, but discovered that OpenAI’s integration for that was in a state of disarray…)
Components:
- An OpenWeather API key
- An OpenAI platform API key
- A Lambda that encapsulates calls to the weather API
- A Python script that uses a “tool” definition to call the Lambda
The Lambda
This is pretty straightforward:
import os
import json
import urllib.parse
import urllib.request

def lambda_handler(event, context):
    # City comes straight off the event (a non-proxy integration); with a
    # proxy integration the JSON body would arrive as a string in event["body"]
    city = event.get("city", "Exeter")
    api_key = os.environ["OPENWEATHER_API_KEY"]
    # URL-encode the city so names containing spaces don't break the query string
    url = (
        "https://api.openweathermap.org/data/2.5/weather"
        f"?q={urllib.parse.quote(city)}&appid={api_key}&units=metric"
    )
    with urllib.request.urlopen(url) as response:
        data = json.loads(response.read())
    return {
        "location": city,
        "temperature_c": data["main"]["temp"],
        "description": data["weather"][0]["description"]
    }
This calls the basic weather API with the city as a query parameter. I also exposed this Lambda via an API Gateway route.
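With the route deployed, it can be sanity-checked directly, without OpenAI in the loop. A quick sketch, assuming the route accepts a JSON POST body; the URL here is a placeholder for your own endpoint:

import requests

# Placeholder: substitute the invoke URL of your own API Gateway route
WEATHER_API_URL = "https://abc123.execute-api.eu-west-2.amazonaws.com/get_weather"

res = requests.post(WEATHER_API_URL, json={"city": "Exeter"})
print(res.json())  # e.g. {"location": "Exeter", "temperature_c": 24.6, ...}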
The Python Script
I will include the full source below, but it needs some explanation. There are three main parts:
- defining the “tool” call
- sending the initial query, which returns a tool call
- sending the initial query again, along with the result of the tool call, to get the final answer
This seems a bit repetitive, but the two calls are necessary because the Chat Completions API is stateless.
Defining The Tool
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city to get weather for"
                    }
                },
                "required": ["city"]
            }
        }
    }
]
This is not that different from MCP descriptions and serves a similar purpose (at least in this use case): exposing the weather API as a “tool” that ChatGPT can use.
The Requests
The first request is of this form and returns a tool call:
[
    {"role": "user", "content": "What is the weather in Exeter today?"}
]
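What comes back is an assistant message with no prose content, just a tool_calls entry whose arguments field is a JSON-encoded string (the id below is a placeholder):

{
    "role": "assistant",
    "content": null,
    "tool_calls": [
        {
            "id": "call_...",
            "type": "function",
            "function": {
                "name": "get_weather",
                "arguments": "{\"city\": \"Exeter\"}"
            }
        }
    ]
}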
The second request repeats the conversation so far, with the tool call and its result appended:
[
    {"role": "user", "content": "What is the weather in Exeter today?"},
    {"role": "assistant", "tool_calls": [...]},
    {"role": "tool", "tool_call_id": ..., "name": "get_weather", "content": "{...API response...}"}
]
The Weather
So what is the weather? It’s quite pleasant right now:
Assistant: The weather in Exeter today is currently few clouds with a temperature of around 24.6°C.
Conclusion
There are quite a few steps here, and ChatGPT could no doubt just answer this question directly! However, I can see how it would be useful for rather more complex APIs. Managing the context manually is tedious, but that is a design decision by OpenAI to give us full control (whether we want it or not).
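For what it’s worth, the manual bookkeeping generalises into a small loop. This is a sketch, not part of the script below: call_tool is a hypothetical dispatcher you would supply, and it assumes the same tools definition as above:

import json
import openai

def run_with_tools(user_prompt, tools, call_tool, model="gpt-4o"):
    # Chat Completions keeps no state, so we accumulate the conversation ourselves
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        response = openai.chat.completions.create(
            model=model, messages=messages, tools=tools, tool_choice="auto"
        )
        message = response.choices[0].message
        if not message.tool_calls:
            return message.content  # no tool needed: the model answered directly
        # Echo the assistant's tool calls back, then append each tool's result
        messages.append(message)
        for tool_call in message.tool_calls:
            result = call_tool(tool_call.function.name,
                               json.loads(tool_call.function.arguments))
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(result)
            })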
Script Source
Here is the full source for the script. It’s hardcoded to Exeter:
import openai
import os
import json
import requests

openai.api_key = os.getenv("OPENAI_API_KEY") or "sk-..."

# Your deployed weather endpoint (POST /get_weather)
WEATHER_API_URL = os.getenv("WEATHER_API_URL")  # ← replace with your actual URL

# Define tool schema (function)
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city to get weather for"
                    }
                },
                "required": ["city"]
            }
        }
    }
]

# Step 1: Send user message and tool definition
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "What is the weather in Exeter today?"}
    ],
    tools=tools,
    tool_choice="auto"
)

message = response.choices[0].message

# Step 2: Handle tool call (if any)
if message.tool_calls:
    for tool_call in message.tool_calls:
        if tool_call.function.name == "get_weather":
            args = json.loads(tool_call.function.arguments)
            city = args["city"]

            # Call your real API
            res = requests.post(WEATHER_API_URL, json={"city": city})
            weather_data = res.json()

            # Step 3: Send tool result back to model
            second_response = openai.chat.completions.create(
                model="gpt-4o",
                messages=[
                    {"role": "user", "content": "What is the weather in Exeter today?"},
                    {
                        "role": "assistant",
                        "tool_calls": [tool_call]
                    },
                    {
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "name": "get_weather",
                        "content": json.dumps(weather_data)
                    }
                ]
            )

            # Final answer
            print("Assistant:", second_response.choices[0].message.content)
else:
    print("Assistant:", message.content)