authorHeiner Lohaus <hlohaus@users.noreply.github.com>2024-02-21 17:02:54 +0100
committerHeiner Lohaus <hlohaus@users.noreply.github.com>2024-02-21 17:02:54 +0100
commit0a0698c7f3fa117e95eaf9b017e4122d15ef4566 (patch)
tree8d2997750aef7bf9ae0f9ec63410279119fffb69 /docs
parentMerge pull request #1603 from xtekky/index (diff)
Diffstat (limited to 'docs')
-rw-r--r--docs/client.md12
-rw-r--r--docs/docker.md19
-rw-r--r--docs/git.md66
-rw-r--r--docs/interference.md69
-rw-r--r--docs/leagcy.md18
-rw-r--r--docs/requirements.md10
6 files changed, 181 insertions, 13 deletions
diff --git a/docs/client.md b/docs/client.md
index 8e02b581..f2ba9bcd 100644
--- a/docs/client.md
+++ b/docs/client.md
@@ -44,10 +44,22 @@ client = Client(
You can use the `ChatCompletions` endpoint to generate text completions as follows:
```python
+response = client.chat.completions.create(
+    model="gpt-3.5-turbo",
+    messages=[{"role": "user", "content": "Say this is a test"}],
+    ...
+)
+print(response.choices[0].message.content)
+```
+
+Streaming is also supported:
+
+```python
stream = client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": "Say this is a test"}],
stream=True,
+ ...
)
for chunk in stream:
if chunk.choices[0].delta.content:
diff --git a/docs/docker.md b/docs/docker.md
index 6baf386a..db33b925 100644
--- a/docs/docker.md
+++ b/docs/docker.md
@@ -1,38 +1,37 @@
-### G4F - Docker
+### G4F - Docker Setup
-If you have Docker installed, you can easily set up and run the project without manually installing dependencies.
-
-1. First, ensure you have both Docker and Docker Compose installed.
+Easily set up and run the G4F project using Docker without the hassle of manual dependency installation.
+1. **Prerequisites:**
- [Install Docker](https://docs.docker.com/get-docker/)
- [Install Docker Compose](https://docs.docker.com/compose/install/)
-2. Clone the GitHub repo:
+2. **Clone the Repository:**
```bash
git clone https://github.com/xtekky/gpt4free.git
```
-3. Navigate to the project directory:
+3. **Navigate to the Project Directory:**
```bash
cd gpt4free
```
-4. Build the Docker image:
+4. **Build the Docker Image:**
```bash
docker pull selenium/node-chrome
docker-compose build
```
-5. Start the service using Docker Compose:
+5. **Start the Service:**
```bash
docker-compose up
```
-Your server will now be running at `http://localhost:1337`. You can interact with the API or run your tests as you would normally.
+Your server will now be accessible at `http://localhost:1337`. Interact with the API or run tests as usual.
To stop the Docker containers, simply run:
@@ -41,6 +40,6 @@ docker-compose down
```
> [!Note]
-> When using Docker, any changes you make to your local files will be reflected in the Docker container thanks to the volume mapping in the `docker-compose.yml` file. If you add or remove dependencies, however, you'll need to rebuild the Docker image using `docker-compose build`.
+> Changes made to local files reflect in the Docker container due to volume mapping in `docker-compose.yml`. However, if you add or remove dependencies, rebuild the Docker image using `docker-compose build`.
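Once the container is up, you can verify the server from Python using only the standard library. The `/v1/models` path is an assumption based on the OpenAI-compatible interface; adjust it if your build exposes a different route:

```python
import urllib.request
import urllib.error

def probe(url="http://localhost:1337/v1/models", timeout=3):
    """Return the HTTP status code if the server answers, else None."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except (urllib.error.URLError, OSError):
        return None

status = probe()
if status is not None:
    print("interference server is up, HTTP", status)
else:
    print("interference server is not reachable yet")
```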
[Return to Home](/) \ No newline at end of file
diff --git a/docs/git.md b/docs/git.md
new file mode 100644
index 00000000..89137ffc
--- /dev/null
+++ b/docs/git.md
@@ -0,0 +1,66 @@
+### G4F - Installation Guide
+
+Follow these steps to install G4F from the source code:
+
+1. **Clone the Repository:**
+
+```bash
+git clone https://github.com/xtekky/gpt4free.git
+```
+
+2. **Navigate to the Project Directory:**
+
+```bash
+cd gpt4free
+```
+
+3. **(Optional) Create a Python Virtual Environment:**
+
+It's recommended to isolate your project dependencies. You can follow the [Python official documentation](https://docs.python.org/3/tutorial/venv.html) for virtual environments.
+
+```bash
+python3 -m venv venv
+```
+
+4. **Activate the Virtual Environment:**
+
+- On Windows:
+
+```bash
+.\venv\Scripts\activate
+```
+
+- On macOS and Linux:
+
+```bash
+source venv/bin/activate
+```
+
+5. **Install Minimum Requirements:**
+
+Install the minimum required packages:
+
+```bash
+pip install -r requirements-min.txt
+```
+
+6. **Or Install All Packages from `requirements.txt`:**
+
+If you prefer, you can install all packages listed in `requirements.txt`:
+
+```bash
+pip install -r requirements.txt
+```
+
+7. **Start Using the Repository:**
+
+You can now create Python scripts and utilize the G4F functionalities. Here's a basic example:
+
+Create a `test.py` file in the root folder and start using the repository:
+
+```python
+import g4f
+# Your code here
+```
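As a slightly fuller sketch of such a `test.py`, the snippet below guards the import so it also runs where g4f is not yet installed; the `create` call mirrors the legacy interface documented elsewhere in these docs:

```python
# Hypothetical test.py; the import is guarded so the script also runs
# in environments where g4f has not been installed yet.
try:
    import g4f
    HAVE_G4F = True
except ImportError:
    HAVE_G4F = False

def build_messages(prompt):
    """Shape a prompt into the OpenAI-style message list g4f expects."""
    return [{"role": "user", "content": prompt}]

if HAVE_G4F:
    response = g4f.ChatCompletion.create(
        model=g4f.models.default,
        messages=build_messages("Hello"),
    )
    print(response)
else:
    print("g4f not installed; messages would be:", build_messages("Hello"))
```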
+
+[Return to Home](/) \ No newline at end of file
diff --git a/docs/interference.md b/docs/interference.md
new file mode 100644
index 00000000..b140f66a
--- /dev/null
+++ b/docs/interference.md
@@ -0,0 +1,69 @@
+### Interference OpenAI-Proxy API
+
+#### Run interference API from PyPi package
+
+```python
+from g4f.api import run_api
+
+run_api()
+```
+
+#### Run interference API from repo
+
+Run server:
+
+```sh
+g4f api
+```
+
+or
+
+```sh
+python -m g4f.api.run
+```
+
+You can then point the official `openai` Python client at the local server:
+
+```python
+from openai import OpenAI
+
+client = OpenAI(
+ api_key="",
+ # Change the API base URL to the local interference API
+ base_url="http://localhost:1337/v1"
+)
+
+response = client.chat.completions.create(
+    model="gpt-3.5-turbo",
+    messages=[{"role": "user", "content": "write a poem about a tree"}],
+    stream=True,
+)
+
+if isinstance(response, dict):
+    # Not streaming
+    print(response.choices[0].message.content)
+else:
+    # Streaming
+    for token in response:
+        content = token.choices[0].delta.content
+        if content is not None:
+            print(content, end="", flush=True)
+```
+
+#### API usage (POST)
+Send a POST request to `/v1/chat/completions` with a body that includes the `model` parameter. This example uses Python with the `requests` library:
+```python
+import requests
+url = "http://localhost:1337/v1/chat/completions"
+body = {
+ "model": "gpt-3.5-turbo-16k",
+ "stream": False,
+ "messages": [
+ {"role": "assistant", "content": "What can you do?"}
+ ]
+}
+json_response = requests.post(url, json=body).json().get('choices', [])
+
+for choice in json_response:
+ print(choice.get('message', {}).get('content', ''))
+```
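The endpoint returns JSON in the OpenAI chat-completion shape. As a sketch, the extraction logic can be isolated and exercised against an illustrative payload (the sample text here is invented):

```python
# Illustrative payload mirroring the OpenAI chat-completion response shape.
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "Sure, this is a test."}}
    ]
}

def extract_text(payload):
    """Pull the assistant text out of an OpenAI-style completion payload."""
    return [c.get("message", {}).get("content", "") for c in payload.get("choices", [])]

print(extract_text(sample))
# → ['Sure, this is a test.']
```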
+
+[Return to Home](/) \ No newline at end of file
diff --git a/docs/leagcy.md b/docs/leagcy.md
index 224bc098..e8808381 100644
--- a/docs/leagcy.md
+++ b/docs/leagcy.md
@@ -179,4 +179,22 @@ async def run_all():
asyncio.run(run_all())
```
+##### Proxy and Timeout Support
+
+All providers support specifying a proxy and a custom timeout in the create functions.
+
+```python
+import g4f
+
+response = g4f.ChatCompletion.create(
+    model=g4f.models.default,
+    messages=[{"role": "user", "content": "Hello"}],
+    proxy="http://host:port",
+    # or socks5://user:pass@host:port
+    timeout=120,  # in seconds
+)
+
+print("Result:", response)
+```
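A common convention is to pick the proxy up from the environment instead of hard-coding it. The helper below is a sketch of that pattern; reading `HTTPS_PROXY`/`HTTP_PROXY` is an assumption of this example, not something g4f does on its own:

```python
import os

def proxy_kwargs(timeout=120):
    """Collect proxy/timeout keyword arguments for a g4f create call.

    Reading HTTPS_PROXY/HTTP_PROXY is a convention assumed here; g4f itself
    only takes the explicit `proxy` argument.
    """
    kwargs = {"timeout": timeout}
    proxy = os.environ.get("HTTPS_PROXY") or os.environ.get("HTTP_PROXY")
    if proxy:
        kwargs["proxy"] = proxy
    return kwargs

print(proxy_kwargs())
```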
+
[Return to Home](/) \ No newline at end of file
diff --git a/docs/requirements.md b/docs/requirements.md
index 7715a403..a4137a64 100644
--- a/docs/requirements.md
+++ b/docs/requirements.md
@@ -6,15 +6,19 @@ Requirements can be installed partially or completely, so you can use G4F however you need.
#### Options
-Install required packages for the OpenaiChat provider:
+Install g4f with all possible dependencies:
+```
+pip install -U g4f[all]
+```
+Or install only g4f and the required packages for the OpenaiChat provider:
```
pip install -U g4f[openai]
```
-Install required packages for the interference api:
+Install required packages for the Interference API:
```
pip install -U g4f[api]
```
-Install required packages for the web interface:
+Install required packages for the Web UI:
```
pip install -U g4f[gui]
```
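To check which optional dependencies are actually present in an environment, you can probe for a marker package per extra. The package names below are illustrative guesses, not an authoritative mapping of the extras:

```python
from importlib.util import find_spec

# Marker packages per extra; these names are illustrative assumptions,
# not the definitive dependency list of each extra.
EXTRAS = {
    "api": "fastapi",
    "gui": "flask",
}

for extra, package in EXTRAS.items():
    state = "installed" if find_spec(package) is not None else "missing"
    print(f"g4f[{extra}]: marker package {package!r} is {state}")
```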