### Interference openai-proxy API

#### Run interference API from PyPI package

```python
from g4f.api import run_api

run_api()
```

#### Run interference API from repo

Run the server:

```sh
g4f api
```

or

```sh
python -m g4f.api.run
```

Then point the official `openai` Python client at the local endpoint:

```python
from openai import OpenAI

client = OpenAI(
    api_key="",
    # Change the API base URL to the local interference API
    base_url="http://localhost:1337/v1"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "write a poem about a tree"}],
    stream=True,
)

if isinstance(response, dict):
    # Not streaming
    print(response["choices"][0]["message"]["content"])
else:
    # Streaming
    for token in response:
        content = token.choices[0].delta.content
        if content is not None:
            print(content, end="", flush=True)
```

#### API usage (POST)

Send a POST request to `/v1/chat/completions` with a JSON body specifying the `model`. This example uses Python with the `requests` library:
```python
import requests
url = "http://localhost:1337/v1/chat/completions"
body = {
    "model": "gpt-3.5-turbo-16k",
    "stream": False,
    "messages": [
        {"role": "assistant", "content": "What can you do?"}
    ]
}
json_response = requests.post(url, json=body).json().get('choices', [])

for choice in json_response:
    print(choice.get('message', {}).get('content', ''))
```

[Return to Home](/)