Diffstat (limited to 'README.md')
-rw-r--r--  README.md | 20 ++++++++++++--------
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/README.md b/README.md
index f313156f..5a86b786 100644
--- a/README.md
+++ b/README.md
@@ -1,8 +1,11 @@
![248433934-7886223b-c1d1-4260-82aa-da5741f303bb](https://github.com/xtekky/gpt4free/assets/98614666/ea012c87-76e0-496a-8ac4-e2de090cc6c9)
-By using this repository or any code related to it, you agree to the [legal notice](./LEGAL_NOTICE.md). The author is not responsible for any copies, forks, re-uploads made by other users, or anything else related to GPT4Free. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license this Repository uses.
-- latest pypi version: [`0.1.7.7`](https://pypi.org/project/g4f/0.1.7.7)
+> **Note**
+> By using this repository or any code related to it, you agree to the [legal notice](./LEGAL_NOTICE.md). The author is not responsible for any copies, forks, re-uploads made by other users, or anything else related to GPT4Free. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license this repository uses.
+
+> **Note**
+> Latest PyPI version: [`0.1.7.7`](https://pypi.org/project/g4f/0.1.7.7)
```sh
pip install -U g4f
```
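Once installed, a minimal request can be sent straight from Python — a sketch, assuming the `g4f.ChatCompletion.create` interface shown in the Usage section below and the `gpt-3.5-turbo` model name:
```py
# Minimal sketch: send one prompt through g4f's default provider selection.
import g4f

response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response)
```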
@@ -124,7 +127,7 @@ docker compose build
docker compose up
```
-You server will now be running at `http://localhost:1337`. You can interact with the API or run your tests as you would normally.
+Your server will now be running at `http://localhost:1337`. You can interact with the API or run your tests as you would normally.
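For a quick smoke test of the running container, one option is to post a request to the OpenAI-compatible `/v1/chat/completions` route of the proxy API shown further down (`python -m g4f.api`) — a sketch, assuming that route and the OpenAI chat payload format:
```py
# Smoke test against the local container (assumes the OpenAI-compatible
# /v1/chat/completions route is served on port 1337).
import requests

resp = requests.post(
    "http://localhost:1337/v1/chat/completions",
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "ping"}],
    },
)
print(resp.status_code)
print(resp.text[:200])
```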
To stop the Docker containers, simply run:
@@ -132,7 +135,8 @@ To stop the Docker containers, simply run:
docker compose down
```
-**Note:** When using Docker, any changes you make to your local files will be reflected in the Docker container thanks to the volume mapping in the `docker-compose.yml` file. If you add or remove dependencies, however, you'll need to rebuild the Docker image using `docker compose build`.
+> **Note**
+> When using Docker, any changes you make to your local files will be reflected in the Docker container thanks to the volume mapping in the `docker-compose.yml` file. If you add or remove dependencies, however, you'll need to rebuild the Docker image using `docker compose build`.
## Usage
@@ -327,7 +331,7 @@ python -m g4f.api
```py
import openai
-openai.api_key = "Empty if you don't use embeddings, otherwise your hugginface token"
+openai.api_key = "Leave empty if you don't use embeddings, otherwise your Hugging Face token"
openai.api_base = "http://localhost:1337/v1"
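
# Illustrative continuation (a sketch): completes the request with the pre-1.0
# `openai` client configured above, against the local OpenAI-compatible /v1 endpoint.
chat_completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "write a poem about a tree"}],
    stream=False,
)
print(chat_completion.choices[0].message.content)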
@@ -546,8 +550,8 @@ python etc/tool/create_provider.py
#### Create Provider
1. Check out the current [list of potential providers](https://github.com/zukixa/cool-ai-stuff#ai-chat-websites), or find your own provider source!
-2. Create a new file in [g4f/provider](./g4f/provider) with the name of the Provider
-3. Implement a class that extends [BaseProvider](./g4f/provider/base_provider.py).
+2. Create a new file in [g4f/Provider](./g4f/Provider) with the name of the Provider
+3. Implement a class that extends [BaseProvider](./g4f/Provider/base_provider.py).
```py
from __future__ import annotations
@@ -573,7 +577,7 @@ class HogeService(AsyncGeneratorProvider):
4. Here you can adjust the settings; for example, if the website supports streaming, set `supports_stream` to `True`...
5. Write code to request the provider in `create_async_generator` and `yield` the response, _even if_ it's a one-time response; do not hesitate to look at other providers for inspiration
-6. Add the Provider Name in [g4f/provider/**init**.py](./g4f/provider/__init__.py)
+6. Add the Provider Name in [g4f/Provider/**init**.py](./g4f/Provider/__init__.py)
```py
from .HogeService import HogeService
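# Illustrative note: also register "HogeService" wherever the existing providers
# are exported in this file (e.g. the __all__ list), following the pattern of
# the entries already there.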