DEPRECATED: This version of the SDK and the API have been deprecated. To try out our latest API and SDK in beta, please contact us at contactus@evaluable.ai
To install the Evaluable AI SDK, open your terminal and run:
```shell
pip install evaluableai
```
OpenAI
Usage
In your code, you only need to change the import from openai to evaluableai for client creation.
```python
import os

from evaluableai import OpenAI

client = OpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
    evaluableai_params={"token": "EVALUABLEAI_API_KEY"},
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-3.5-turbo",
)
```
While you can provide an api_key keyword argument, we recommend using python-dotenv and adding EVALUABLEAI_API_KEY="My API Key" to your .env file so that your API key is not stored in source control.
Async usage
In your code, you only need to change the import from openai to evaluableai for client creation. Simply import AsyncOpenAI instead of OpenAI and use await with each API call.
```python
import asyncio
import os

from evaluableai import AsyncOpenAI

client = AsyncOpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
    evaluableai_params={"token": "EVALUABLEAI_API_KEY"},
)


async def main() -> None:
    chat_completion = await client.chat.completions.create(
        messages=[
            {
                "role": "user",
                "content": "Say this is a test",
            }
        ],
        model="gpt-3.5-turbo",
    )


asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
Mistral AI
Usage
In your code, you only need to change the import from mistralai to evaluableai for client creation.
```python
import os

from evaluableai import MistralClient
from mistralai.models.chat_completion import ChatMessage

api_key = os.environ["MISTRAL_API_KEY"]
model = "mistral-large-latest"

client = MistralClient(api_key=api_key)

messages = [ChatMessage(role="user", content="What is the best French cheese?")]

# No streaming
chat_response = client.chat(
    model=model,
    messages=messages,
)

print(chat_response.choices[0].message.content)
```
Async usage
In your code, you only need to change the import from mistralai to evaluableai for client creation. Simply import MistralAsyncClient instead of MistralClient and use await with each API call:
```python
import asyncio
import os

from evaluableai import MistralAsyncClient
from mistralai.models.chat_completion import ChatMessage


async def main():
    api_key = os.environ["MISTRAL_API_KEY"]
    model = "mistral-tiny"

    client = MistralAsyncClient(
        api_key=api_key,
        evaluableai_params={"token": "EVALUABLEAI_API_KEY"},
    )

    messages = [ChatMessage(role="user", content="What is the best French cheese?")]

    # Await the async chat method
    async_response = await client.chat(model=model, messages=messages)

    # How to extract the message content depends on the structure of the
    # ChatCompletionResponse; here we print the full response object.
    print("Response:", async_response)


if __name__ == "__main__":
    asyncio.run(main())
```