Agent-LLM (Large Language Model)
Please use the outreach email for media, sponsorship, or other miscellaneous inquiries.
Do not send us emails with troubleshooting requests, feature requests, or bug reports; please direct those to GitHub Issues or Discord.
Agent-LLM is an Artificial Intelligence Automation Platform designed to power efficient AI instruction management across multiple providers. Our agents are equipped with adaptive memory, and this versatile solution offers a powerful plugin system that supports a wide range of commands, including web browsing. With growing support for numerous AI providers and models, Agent-LLM is constantly evolving to empower diverse applications.
You're welcome to disregard this message, but if you do and the AI decides that the best course of action for its task is to build a command to format your entire computer, that is on you. Understand that agents are given full, unrestricted terminal access by design and that we have no intention of building any safeguards. This project intends to stay lightweight and versatile for the best possible research outcomes.
Please note that using some AI providers (such as OpenAI's GPT-4 API) can be expensive! Monitor your usage carefully to avoid incurring unexpected costs. We're NOT responsible for your usage under any circumstance.
This project is under active development and may still have issues. We appreciate your understanding and patience. If you encounter any problems, please first check the open issues. If your issue is not listed, kindly create a new issue detailing the error or problem you experienced. Thank you for your support!
Adaptive Memory Management: Efficient long-term and short-term memory handling for improved AI performance.
Versatile Plugin System: Extensible command support for various AI models, ensuring flexibility and adaptability.
Multi-Provider Compatibility: Seamless integration with leading AI providers, including OpenAI GPT series, Hugging Face Huggingchat, GPT4All, GPT4Free, Oobabooga Text Generation Web UI, Kobold, llama.cpp, FastChat, Google Bard, Bing, and more. Run any model with Agent-LLM!
Web Browsing & Command Execution: Advanced capabilities to browse the web and execute commands for a more interactive AI experience.
Code Evaluation: Robust support for code evaluation, providing assistance in programming tasks.
Docker Deployment: Effortless deployment using Docker, simplifying setup and maintenance.
Audio-to-Text Conversion: Integration with Hugging Face for seamless audio-to-text transcription.
Platform Interoperability: Easy interaction with popular platforms like Twitter, GitHub, Google, DALL-E, and more.
Text-to-Speech Options: Multiple TTS choices, featuring Brian TTS, Mac OS TTS, and ElevenLabs.
Expanding AI Support: Continuously updated to include new AI providers and services.
AI Agent Management: Streamlined creation, renaming, deletion, and updating of AI agent settings.
Flexible Chat Interface: User-friendly chat interface for conversational and instruction-based tasks.
Task Execution: Efficient starting, stopping, and monitoring of AI agent tasks with asynchronous execution.
Chain Management: Sophisticated management of multi-agent task chains for complex workflows and collaboration.
Custom Prompts: Easy creation, editing, and deletion of custom prompts to standardize user inputs.
Command Control: Granular control over agent abilities through enabling or disabling specific commands.
RESTful API: FastAPI-powered RESTful API for seamless integration with external applications and services.
The frontend web application of Agent-LLM provides an intuitive and interactive user interface for users to:
Manage agents: View the list of available agents, add new agents, delete agents, and switch between agents.
Set objectives: Input objectives for the selected agent to accomplish.
Start tasks: Initiate the task manager to execute tasks based on the set objective.
Instruct agents: Interact with agents by sending instructions and receiving responses in a chat-like interface.
Available commands: View the list of available commands and click on a command to insert it into the objective or instruction input boxes.
Dark mode: Toggle between light and dark themes for the frontend.
Built using NextJS and Material-UI
Communicates with the backend through API endpoints
Clone the repositories for the Agent-LLM front end and back end, then start the services with Docker.
Access the web interface at http://localhost:3000
As a reminder, this can be dangerous to run locally depending on what commands you give your agents access to. ⚠️ Run this in Docker or a Virtual Machine!
Clone the repository for the Agent-LLM back end and start it.
Clone the repository for the Agent-LLM front end in a separate terminal and start it.
Access the web interface at http://localhost:3000
Agent-LLM utilizes a .env configuration file to store AI language model settings, API keys, and other options. Use the supplied .env.example as a template to create your personalized .env file. Configuration settings include:
WORKING_DIRECTORY: Set the agent's working directory.
EXTENSIONS_SETTINGS: Configure settings for OpenAI, Hugging Face, Selenium, Twitter, and GitHub.
VOICE_OPTIONS: Choose between Brian TTS, Mac OS TTS, or ElevenLabs for text-to-speech.
For a detailed explanation of each setting, refer to the .env.example file provided in the repository.
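As a minimal sketch of how such settings are typically consumed, the snippet below reads the working directory and a provider key from the .env file using python-dotenv; the fallback workspace path and the OPENAI_API_KEY variable name are assumptions for illustration, not confirmed settings.

```python
# Minimal sketch: loading .env settings with python-dotenv.
# WORKING_DIRECTORY comes from the settings listed above; the fallback path
# and the OPENAI_API_KEY variable name are illustrative assumptions.
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the current directory

working_directory = os.getenv("WORKING_DIRECTORY", "./WORKSPACE")  # assumed default
openai_api_key = os.getenv("OPENAI_API_KEY")  # typical provider key name (assumption)

print(f"Agent workspace: {working_directory}")
```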
Agent-LLM provides several API endpoints for managing agents, prompts, and chains.
To learn more about the API endpoints and their usage, visit the API documentation, which is hosted locally; the frontend must be running for the documentation links to work.
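For illustration, here is a hedged Python sketch of calling the backend API; the base URL, port, endpoint paths, and payload shape are assumptions made for this example and should be checked against the locally hosted API documentation.

```python
# Hypothetical sketch of calling the Agent-LLM backend API with requests.
# The base URL, port, routes, and payload shape are assumptions for
# illustration -- verify them against the locally hosted API docs.
import requests

BASE_URL = "http://localhost:7437"  # assumed backend address

# List agents (hypothetical route).
agents = requests.get(f"{BASE_URL}/api/agent").json()
print(agents)

# Send an instruction to an agent (hypothetical route and payload).
response = requests.post(
    f"{BASE_URL}/api/agent/my-agent/instruct",
    json={"prompt": "Summarize the open GitHub issues."},
)
print(response.json())
```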
To introduce new commands, generate a new Python file in the commands folder and define a class inheriting from the Commands class. Implement the desired functionality as methods within the class and incorporate them into the commands dictionary.
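Below is a minimal sketch of such a command module; the import path, constructor behavior, and registration details are assumptions based on the description above, so adapt them to the actual Commands base class in the repository.

```python
# commands/greeting.py -- minimal sketch of a custom command module.
# The import path and registration details are assumptions based on the
# description above; adapt them to the actual Commands base class.
from Commands import Commands


class greeting(Commands):
    def __init__(self):
        super().__init__()
        # Map a human-readable command name to the method implementing it.
        self.commands = {"Say Hello": self.say_hello}

    def say_hello(self, name: str = "world") -> str:
        """Return a simple greeting so an agent can exercise the plugin system."""
        return f"Hello, {name}!"
```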
Each agent has its own AI provider and provider settings, such as model, temperature, and max tokens, depending on the provider. You can use this to make agents better suited to particular tasks by giving them more capable models for specific steps in chains.
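As an illustration of per-agent provider settings, the hypothetical configuration below assigns a stronger model to one agent and a cheaper local model to another; the key names and values are assumptions, not the project's exact schema.

```python
# Hypothetical per-agent provider settings; the key names and values below
# are illustrative assumptions, not the project's exact configuration schema.
research_agent = {
    "provider": "openai",
    "model": "gpt-4",         # stronger model for the harder steps in a chain
    "temperature": 0.2,
    "max_tokens": 4000,
}

summarizer_agent = {
    "provider": "llamacpp",
    "model": "ggml-model-q4_0.bin",  # cheaper local model for routine steps
    "temperature": 0.7,
    "max_tokens": 2000,
}
```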
We're always looking for ways to improve Agent-LLM and make it more useful for our users. Your support will help us continue to develop and enhance the application. Thank you for considering supporting us!
This project was inspired by and is built using code from the following open-source repositories:
Please consider exploring and contributing to these projects if you like what we are doing.
We welcome contributions to Agent-LLM! If you're interested in contributing, please check out our contributions guide, the open issues on the backend and frontend, and the existing pull requests; you can submit a pull request or suggest new features. To stay updated on the project's progress, feel free to join our Discord.
We appreciate any support for Agent-LLM's development, including donations, sponsorships, and any other kind of assistance. If you would like to support us, please contact us through the outreach email mentioned above.
| Josh (@Josh-XT) | James (@JamesonRGrieve) |
|---|---|