Diffstat (limited to 'docs/async_client.md')
-rw-r--r--  docs/async_client.md  |  92
1 file changed, 48 insertions(+), 44 deletions(-)
diff --git a/docs/async_client.md b/docs/async_client.md
index 7194c792..fe6f46ff 100644
--- a/docs/async_client.md
+++ b/docs/async_client.md
@@ -1,9 +1,10 @@
-# G4F - Async client API Guide
-The G4F async client API is a powerful asynchronous interface for interacting with various AI models. This guide provides comprehensive information on how to use the API effectively, including setup, usage examples, best practices, and important considerations for optimal performance.
+
+# G4F - AsyncClient API Guide
+The G4F AsyncClient API is a powerful asynchronous interface for interacting with various AI models. This guide provides comprehensive information on how to use the API effectively, including setup, usage examples, best practices, and important considerations for optimal performance.
## Compatibility Note
-The G4F async client API is designed to be compatible with the OpenAI API, making it easy for developers familiar with OpenAI's interface to transition to G4F.
+The G4F AsyncClient API is designed to be compatible with the OpenAI API, making it easy for developers familiar with OpenAI's interface to transition to G4F.
## Table of Contents
- [Introduction](#introduction)
@@ -26,7 +27,7 @@ The G4F async client API is designed to be compatible with the OpenAI API, makin
## Introduction
-The G4F async client API is an asynchronous version of the standard G4F Client API. It offers the same functionality as the synchronous API but with improved performance due to its asynchronous nature. This guide will walk you through the key features and usage of the G4F async client API.
+The G4F AsyncClient API is an asynchronous version of the standard G4F Client API. It offers the same functionality as the synchronous API but with improved performance due to its asynchronous nature. This guide will walk you through the key features and usage of the G4F AsyncClient API.
## Key Features
@@ -39,13 +40,13 @@ The G4F async client API is an asynchronous version of the standard G4F Client A
## Getting Started
-### Initializing the Client
-**To use the G4F `Client`, create a new instance:**
+### Initializing the AsyncClient
+**To use the G4F `AsyncClient`, create a new instance:**
```python
-from g4f.client import Client
+from g4f.client import AsyncClient
from g4f.Provider import OpenaiChat, Gemini
-client = Client(
+client = AsyncClient(
provider=OpenaiChat,
image_provider=Gemini,
# Add other parameters as needed
@@ -56,7 +57,7 @@ client = Client(
## Creating Chat Completions
**Here’s an improved example of creating chat completions:**
```python
-response = await async_client.chat.completions.create(
+response = await client.chat.completions.create(
model="gpt-4o-mini",
messages=[
{
@@ -77,9 +78,9 @@ You can adjust these parameters based on your specific needs.
### Configuration
-**Configure the `Client` with additional settings:**
+**Configure the `AsyncClient` with additional settings:**
```python
-client = Client(
+client = AsyncClient(
api_key="your_api_key_here",
proxies="http://user:pass@host",
# Add other parameters as needed
@@ -93,12 +94,12 @@ client = Client(
**Generate text completions using the ChatCompletions endpoint:**
```python
import asyncio
-from g4f.client import Client
+from g4f.client import AsyncClient
async def main():
- client = Client()
+ client = AsyncClient()
- response = await client.chat.completions.async_create(
+ response = await client.chat.completions.create(
model="gpt-4o-mini",
messages=[
{
@@ -119,12 +120,12 @@ asyncio.run(main())
**Process responses incrementally as they are generated:**
```python
import asyncio
-from g4f.client import Client
+from g4f.client import AsyncClient
async def main():
- client = Client()
-
- stream = await client.chat.completions.async_create(
+ client = AsyncClient()
+
+ stream = client.chat.completions.create(
model="gpt-4",
messages=[
{
@@ -136,7 +137,7 @@ async def main():
)
async for chunk in stream:
- if chunk.choices[0].delta.content:
+ if chunk.choices and chunk.choices[0].delta.content:
print(chunk.choices[0].delta.content, end="")
asyncio.run(main())
@@ -150,14 +151,14 @@ asyncio.run(main())
import g4f
import requests
import asyncio
-from g4f.client import Client
+from g4f.client import AsyncClient
async def main():
- client = Client()
+ client = AsyncClient()
image = requests.get("https://raw.githubusercontent.com/xtekky/gpt4free/refs/heads/main/docs/cat.jpeg", stream=True).raw
- response = await client.chat.completions.async_create(
+ response = await client.chat.completions.create(
model=g4f.models.default,
provider=g4f.Provider.Bing,
messages=[
@@ -180,12 +181,12 @@ asyncio.run(main())
**Generate images using a specified prompt:**
```python
import asyncio
-from g4f.client import Client
+from g4f.client import AsyncClient
async def main():
- client = Client()
+ client = AsyncClient()
- response = await client.images.async_generate(
+ response = await client.images.generate(
prompt="a white siamese cat",
model="flux"
)
@@ -201,12 +202,12 @@ asyncio.run(main())
#### Base64 Response Format
```python
import asyncio
-from g4f.client import Client
+from g4f.client import AsyncClient
async def main():
- client = Client()
+ client = AsyncClient()
- response = await client.images.async_generate(
+ response = await client.images.generate(
prompt="a white siamese cat",
model="flux",
response_format="b64_json"
@@ -224,13 +225,13 @@ asyncio.run(main())
**Execute multiple tasks concurrently:**
```python
import asyncio
-from g4f.client import Client
+from g4f.client import AsyncClient
async def main():
- client = Client()
+ client = AsyncClient()
- task1 = client.chat.completions.async_create(
- model="gpt-4o-mini",
+ task1 = client.chat.completions.create(
+ model=None,
messages=[
{
"role": "user",
@@ -239,18 +240,21 @@ async def main():
]
)
- task2 = client.images.async_generate(
+ task2 = client.images.generate(
model="flux",
prompt="a white siamese cat"
)
- chat_response, image_response = await asyncio.gather(task1, task2)
-
- print("Chat Response:")
- print(chat_response.choices[0].message.content)
-
- print("Image Response:")
- print(image_response.data[0].url)
+ try:
+ chat_response, image_response = await asyncio.gather(task1, task2)
+
+ print("Chat Response:")
+ print(chat_response.choices[0].message.content)
+
+ print("\nImage Response:")
+ print(image_response.data[0].url)
+ except Exception as e:
+ print(f"An error occurred: {e}")
asyncio.run(main())
```
@@ -286,7 +290,7 @@ client = AsyncClient(provider=g4f.Provider.OpenaiChat)
# or
-response = await client.chat.completions.async_create(
+response = await client.chat.completions.create(
model="gpt-4",
provider=g4f.Provider.Bing,
messages=[
@@ -306,7 +310,7 @@ Implementing proper error handling and following best practices is crucial when
1. **Use try-except blocks to catch and handle exceptions:**
```python
try:
- response = await client.chat.completions.async_create(
+ response = await client.chat.completions.create(
model="gpt-4o-mini",
messages=[
{
@@ -368,7 +372,7 @@ logger = logging.getLogger(__name__)
async def make_api_call():
try:
- response = await client.chat.completions.async_create(...)
+ response = await client.chat.completions.create(...)
logger.info(f"API call successful. Tokens used: {response.usage.total_tokens}")
except Exception as e:
logger.error(f"API call failed: {e}")
@@ -387,7 +391,7 @@ def get_cached_response(query):
```
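The hunk above shows only the tail of the caching example, so the body of `get_cached_response` is not visible in this diff. As a rough sketch (the helper body below is assumed for illustration, not taken from the original guide), a simple in-memory cache around an `AsyncClient` call could look like this:

```python
import asyncio
from g4f.client import AsyncClient

# Illustrative only: a naive in-memory cache keyed by the prompt text.
_cache = {}

async def get_cached_response(client: AsyncClient, query: str) -> str:
    if query in _cache:
        return _cache[query]

    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": query}]
    )
    answer = response.choices[0].message.content
    _cache[query] = answer
    return answer

async def main():
    client = AsyncClient()
    # The second call with the same query is served from the cache.
    print(await get_cached_response(client, "What is the capital of France?"))
    print(await get_cached_response(client, "What is the capital of France?"))

asyncio.run(main())
```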
## Conclusion
-The G4F async client API provides a powerful and flexible way to interact with various AI models asynchronously. By leveraging its features and following best practices, you can build efficient and responsive applications that harness the power of AI for text generation, image analysis, and image creation.
+The G4F AsyncClient API provides a powerful and flexible way to interact with various AI models asynchronously. By leveraging its features and following best practices, you can build efficient and responsive applications that harness the power of AI for text generation, image analysis, and image creation.
Remember to handle errors gracefully, implement rate limiting, and monitor your API usage to ensure optimal performance and reliability in your applications.
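As a closing illustration of the rate-limiting advice above (a minimal sketch, not part of the original guide; the limit of two concurrent requests and the prompts are arbitrary), concurrent calls can be capped with `asyncio.Semaphore`:

```python
import asyncio
from g4f.client import AsyncClient

async def limited_completion(client: AsyncClient, semaphore: asyncio.Semaphore, prompt: str) -> str:
    # The semaphore caps how many requests are in flight at the same time.
    async with semaphore:
        response = await client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}]
        )
        return response.choices[0].message.content

async def main():
    client = AsyncClient()
    semaphore = asyncio.Semaphore(2)  # illustrative limit: at most 2 concurrent requests
    prompts = [f"Give me one fun fact about the number {i}" for i in range(5)]
    results = await asyncio.gather(
        *(limited_completion(client, semaphore, prompt) for prompt in prompts)
    )
    for result in results:
        print(result)

asyncio.run(main())
```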