# import openai, json, time, asyncio
# client = openai.AsyncOpenAI(
#     api_key="sk-1234",
#     base_url="http://0.0.0.0:8000"
# )

# super_fake_messages = [
#   {
#     "role": "user",
#     "content": f"What's the weather like in San Francisco, Tokyo, and Paris? {time.time()}"
#   },
#   {
#     "content": None,
#     "role": "assistant",
#     "tool_calls": [
#       {
#         "id": "1",
#         "function": {
#           "arguments": "{\"location\": \"San Francisco\", \"unit\": \"celsius\"}",
#           "name": "get_current_weather"
#         },
#         "type": "function"
#       },
#       {
#         "id": "2",
#         "function": {
#           "arguments": "{\"location\": \"Tokyo\", \"unit\": \"celsius\"}",
#           "name": "get_current_weather"
#         },
#         "type": "function"
#       },
#       {
#         "id": "3",
#         "function": {
#           "arguments": "{\"location\": \"Paris\", \"unit\": \"celsius\"}",
#           "name": "get_current_weather"
#         },
#         "type": "function"
#       }
#     ]
#   },
#   {
#     "tool_call_id": "1",
#     "role": "tool",
#     "name": "get_current_weather",
#     "content": "{\"location\": \"San Francisco\", \"temperature\": \"90\", \"unit\": \"celsius\"}"
#   },
#   {
#     "tool_call_id": "2",
#     "role": "tool",
#     "name": "get_current_weather",
#     "content": "{\"location\": \"Tokyo\", \"temperature\": \"30\", \"unit\": \"celsius\"}"
#   },
#   {
#     "tool_call_id": "3",
#     "role": "tool",
#     "name": "get_current_weather",
#     "content": "{\"location\": \"Paris\", \"temperature\": \"50\", \"unit\": \"celsius\"}"
#   }
# ]

# async def chat_completions():
#     super_fake_response = await client.chat.completions.create(
#         model="gpt-3.5-turbo",
#         messages=super_fake_messages,
#         seed=1337,
#         stream=False
#     )  # get a new response from the model where it can see the function response
#     await asyncio.sleep(1)
#     return super_fake_response

# async def loadtest_fn(n=1):
#     start = time.time()
#     tasks = [chat_completions() for _ in range(n)]
#     # bind results to a local name; rebinding the global `chat_completions`
#     # here would shadow the coroutine function and break any repeat run
#     responses = await asyncio.gather(*tasks)
#     successful_completions = [r for r in responses if r is not None]
#     print(n, time.time() - start, len(successful_completions))

# # print(json.dumps(super_fake_response.model_dump(), indent=4))

# asyncio.run(loadtest_fn())