Diffstat (limited to 'README.md')
 README.md | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+), 0 deletions(-)
diff --git a/README.md b/README.md
index f54750d6..a4365e09 100644
--- a/README.md
+++ b/README.md
@@ -18,6 +18,17 @@ pip install -U g4f
```sh
docker pull hlohaus789/g4f
```
+## To do
+As per the survey, here is the list of improvements to come:
+- [ ] Improve documentation (on g4f.mintlify.app) & do video tutorials
+- [ ] Improve the provider status list & updates
+- [ ] Tutorials on how to reverse engineer sites to write your own wrapper (PoC only, of course)
+- [ ] Improve the Bing wrapper (might write a new wrapper in Go, as it is very fast)
+- [ ] Update the repository to use the new `openai` library syntax (e.g. the `OpenAI()` client class; see the sketch below)
+- [ ] Potential support and development of local models
+- [ ] Improve compatibility and error handling
+
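+For reference, a minimal sketch of the new `openai` client-style syntax mentioned above, assuming `openai>=1.0` and a local OpenAI-compatible endpoint; the base URL, API key and model name are illustrative:
+```python
+from openai import OpenAI
+
+# Point the official client at a local OpenAI-compatible endpoint (illustrative values).
+client = OpenAI(api_key="not-needed", base_url="http://localhost:1337/v1")
+
+response = client.chat.completions.create(
+    model="gpt-3.5-turbo-16k",
+    messages=[{"role": "user", "content": "What can you do?"}],
+)
+print(response.choices[0].message.content)
+```
+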
## 🆕 What's New
- <a href="./README-DE.md"><img src="https://img.shields.io/badge/öffnen in-🇩🇪 deutsch-blue.svg" alt="Öffnen in DE"></a>
@@ -416,6 +427,26 @@ if __name__ == "__main__":
main()
```
+## API usage (POST)
+#### Chat completions
+Send a POST request to `/v1/chat/completions` with a JSON body that contains the `model` field. This example uses Python with the `requests` library:
+```python
+import requests
+
+url = "http://localhost:1337/v1/chat/completions"
+body = {
+    "model": "gpt-3.5-turbo-16k",
+    "stream": False,
+    "messages": [
+        {"role": "user", "content": "What can you do?"}
+    ]
+}
+
+# Send the request and read the returned choices (empty list if none).
+choices = requests.post(url, json=body).json().get("choices", [])
+
+for choice in choices:
+    print(choice.get("message", {}).get("content", ""))
+```
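+
+To stream the answer instead, set `"stream": True`. A minimal sketch, assuming the endpoint returns OpenAI-style `data: ...` chunks (the exact streaming format may differ):
+```python
+import json
+import requests
+
+url = "http://localhost:1337/v1/chat/completions"
+body = {
+    "model": "gpt-3.5-turbo-16k",
+    "stream": True,
+    "messages": [{"role": "user", "content": "What can you do?"}]
+}
+
+# Keep the connection open and print each content delta as it arrives.
+with requests.post(url, json=body, stream=True) as response:
+    for line in response.iter_lines():
+        if not line or not line.startswith(b"data: "):
+            continue
+        payload = line[len(b"data: "):]
+        if payload == b"[DONE]":  # end-of-stream sentinel used by OpenAI-style APIs
+            break
+        chunk = json.loads(payload)
+        delta = chunk.get("choices", [{}])[0].get("delta", {})
+        print(delta.get("content", "") or "", end="", flush=True)
+```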
+
+
## 🚀 Providers and Models
### GPT-4