Integrate Open Source Models into Your Existing Apps Without Any Code Change
product
Jun 08, 2023

Ce Gao
Co-founder

While many developers build their applications on the OpenAI GPT API, some also want to experiment with open-source models. However, integrating different open-source models can be challenging because their APIs and GPU requirements differ.

One way to simplify the integration of open-source models is to use Modelz, a platform that offers OpenAI-compatible APIs for a variety of language models. With Modelz, developers can seamlessly switch between different language models without making significant changes to their code.

Use open-source LLMs in SkyAGI

Let me illustrate how Modelz simplifies the workflow of integrating different open-source models, using SkyAGI as an example.

SkyAGI is a Python package that showcases the emerging capability of LLMs to simulate believable human behavior. It implements the idea of Generative Agents and delivers a role-playing game that creates a very interesting user experience.

SkyAGI uses LangChain and the OpenAI API to generate the characters' outputs. If developers want to experiment with a different open-source model, they would normally need to modify the code to integrate the new model and its API, which can be challenging and time-consuming.
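
For reference, the OpenAI coupling in a LangChain-based app like SkyAGI usually boils down to a configured LLM wrapper. The sketch below is illustrative rather than actual SkyAGI code, and the model name and prompt are placeholders:

from langchain.llms import OpenAI

# The wrapper reads OPENAI_API_KEY (and, if set, OPENAI_API_BASE) from the
# environment, so which backend it talks to is purely a matter of configuration.
llm = OpenAI(model_name="text-davinci-003", temperature=0.7)
print(llm("Describe the character's next action in one sentence."))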

Instead, developers can deploy an OpenAI-compatible API instance on Modelz and then just replace OPENAI_API_BASE and OPENAI_API_KEY to point to that instance.

export OPENAI_API_BASE=https://bloomz-webhiq5i9dagphhu.modelz.io
# Use your API key on Modelz
export OPENAI_API_KEY=mzi-abcdefg...

Or you can set the API base and key directly with the OpenAI Python package:

import openai

# Point the client at your Modelz instance
openai.api_base = "https://bloomz-webhiq5i9dagphhu.modelz.io"
openai.api_key = "mzi-abcdefg..."
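
With the client configured this way, the rest of the code keeps making ordinary OpenAI calls. A minimal sketch, assuming the instance exposes the standard chat completions endpoint (the model name and prompt here are placeholders):

import openai

# Assumes openai.api_base and openai.api_key are set to your Modelz instance as above.
response = openai.ChatCompletion.create(
    model="bigscience/bloomz-560m",  # placeholder: use the model your instance serves
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response["choices"][0]["message"]["content"])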

Use a locally deployed instance

It also works if you want to run the server locally. Just install our open-source Python package modelz-llm and start it:

# Install the OpenAI-compatible server and serve bloomz-560m on the CPU
pip install modelz-llm
modelz-llm -m bigscience/bloomz-560m --device cpu

Then set the environment variables to point to your locally deployed instance:

export OPENAI_API_BASE=http://localhost:8080
# Use your API key on Modelz
export OPENAI_API_KEY=mzi-abcdefg...
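
To verify the setup, you can send a single request through the OpenAI client to the local server. A minimal sketch, assuming the locally served model accepts chat completion requests (the model name mirrors the one passed to modelz-llm above):

import openai

# Point the client at the locally running modelz-llm server.
openai.api_base = "http://localhost:8080"
openai.api_key = "mzi-abcdefg..."  # the OpenAI client still requires a key to be set

response = openai.ChatCompletion.create(
    model="bigscience/bloomz-560m",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response["choices"][0]["message"]["content"])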

Community

Our open-source project, modelz-llm, provides an OpenAI-compatible API, giving developers a platform to experiment with various language models seamlessly. With modelz-llm, developers can easily switch between different language models without significant code changes. You can deploy it on Modelz with just one click, or run it locally.

Furthermore, we welcome contributions to the project, which help improve its functionality, expand its capabilities, and make it an even more valuable resource for developers. Our community is always looking for enthusiasts to help in many different ways:

  • ⚽ Simply start using modelz-llm: Go through the README. Does everything work as expected? If not, we're always looking for improvements. Chat with us on 💬 Discord or file an issue on GitHub.
  • 🙋 Triaging Issues and Pull Requests: One great way you can contribute to the project without writing any code is to help triage issues and pull requests as they come in.
  • 👨‍💻 Contribute to modelz-llm: Support more open-source LLMs, or help improve the performance, or support more APIs.
  • 💬 Discuss modelz-llm on Discord: Chat with us on 💬 Discord.

As we continue to explore the possibilities of AI and its impact on our world, I wish you a great journey in your pursuit of knowledge and innovation. Whether you are just getting started with AI or are an experienced professional, the field of AI offers endless opportunities for growth and discovery.