author     Vaibhav Chauhan <95951482+rover07@users.noreply.github.com>  2023-10-24 21:10:12 +0200
committer  GitHub <noreply@github.com>  2023-10-24 21:10:12 +0200
commit     37dba12e96b192b77f9ca7af1604e92b516f5080 (patch)
tree       277d2afc58444946fc6dca89c39ab9e2e4035e60 /README.md
parent     ~ | g4f `v-0.1.7.7` (diff)
Diffstat
-rw-r--r--  README.md  12
1 file changed, 6 insertions, 6 deletions
diff --git a/README.md b/README.md
index bcaf1841..f313156f 100644
--- a/README.md
+++ b/README.md
@@ -300,16 +300,16 @@ response = g4f.ChatCompletion.create(
print(f"Result:", response)
```
-### interference openai-proxy api (use with openai python package)
+### interference openai-proxy API (use with openai python package)
-#### run interference api from pypi package:
+#### run interference API from pypi package:
```py
from g4f.api import run_api
run_api()
```
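With the server running, the `openai` Python package (pre-1.0 interface) can be pointed at it. A minimal sketch, assuming the interference API listens on a default local address; check the address `run_api()` reports when it starts:

```py
import openai

# Placeholder key: requests stay local, so any value works here.
openai.api_key = "sk-anything"
# Assumed local address of the interference API; adjust host/port if your
# server reports something different on startup.
openai.api_base = "http://localhost:1337/v1"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "write a poem about a tree"}],
)
print(response["choices"][0]["message"]["content"])
```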
-#### run interference api from repo:
+#### run interference API from repo:
If you want to use the embedding function, you need to get a Hugging Face token. You can get one at https://huggingface.co/settings/tokens; make sure your role is set to write. If you have your token, just use it instead of the OpenAI API key.
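On the client side, the token then goes where the OpenAI key would normally go. A hedged sketch, in which the base address and the embedding model name are assumptions rather than values from this README:

```py
import openai

openai.api_key = "hf_xxxxxxxxxxxxxxxx"        # your Hugging Face token (placeholder)
openai.api_base = "http://localhost:1337/v1"  # assumed address of the interference API

# Request an embedding through the proxy; the model name is an assumption.
result = openai.Embedding.create(
    model="text-embedding-ada-002",
    input="Hello world",
)
print(len(result["data"][0]["embedding"]))
```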
run server:
@@ -539,7 +539,7 @@ Call in your terminal the "create_provider" script:
python etc/tool/create_provider.py
```
1. Enter your name for the new provider.
-2. Copy & Paste a cURL command from your browser developer tools.
+2. Copy and paste a cURL command from your browser developer tools.
3. Let the AI create the provider for you.
4. Customize the provider according to your needs.
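Once the generated provider is customized and registered (see the `__init__.py` step below), it can be tried directly. A sketch assuming the name entered in step 1 was `HogeService`, matching the template that follows:

```py
import g4f
# Hypothetical name from step 1; module path as in the __init__.py step below.
from g4f.provider import HogeService

response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    provider=HogeService,
    messages=[{"role": "user", "content": "Hello"}],
)
print(response)
```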
@@ -571,8 +571,8 @@ class HogeService(AsyncGeneratorProvider):
yield ""
```
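For orientation, a rough sketch of what the template can look like once filled in. It anticipates steps 4 and 5 below; the endpoint, payload, and response field are invented placeholders, not a real site:

```py
from aiohttp import ClientSession

# Base class as in the template above; the exact import path can vary by version.
from .base_provider import AsyncGeneratorProvider


class HogeService(AsyncGeneratorProvider):
    url = "https://example.com"   # placeholder site
    supports_stream = True        # step 4: enable if the site streams responses
    working = True

    @classmethod
    async def create_async_generator(cls, model: str, messages: list, **kwargs):
        # Step 5: request the site and yield the answer, even if it arrives in one piece.
        async with ClientSession() as session:
            async with session.post(
                f"{cls.url}/api/chat",                        # invented endpoint
                json={"model": model, "messages": messages},
            ) as response:
                response.raise_for_status()
                data = await response.json()
                yield data["answer"]                          # assumed response field
```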
-4. Here, you can adjust the settings, for example if the website does support streaming, set `supports_stream` to `True`...
-5. Write code to request the provider in `create_async_generator` and `yield` the response, _even if_ its a one-time response, do not hesitate to look at other providers for inspiration
+4. Here, you can adjust the settings, for example, if the website does support streaming, set `supports_stream` to `True`...
+5. Write code to request the provider in `create_async_generator` and `yield` the response, _even if_ it's a one-time response; do not hesitate to look at other providers for inspiration
6. Add the Provider Name in [g4f/provider/__init__.py](./g4f/provider/__init__.py)
```py