Agent-LLM Docs
GPUGPT4ALL


AI Provider: GPT4All

Quick Start Guide

Warning: Intel Macs are not supported by GPT4All.

Note: AI_MODEL should stay default unless there is a folder in model-prompts specific to the model you're using. You can also create such a folder and add your own prompts.

Update your agent settings

  1. Set AI_PROVIDER to gpt4all, or to gpugpt4all if you want to run on GPU.

  2. Set MODEL_PATH to the path of your model file.

  3. Set AI_MODEL to default or the name of the model from the model-prompts folder.

  4. Set AI_TEMPERATURE to a value between 0 and 1. The higher the value, the more creative the output.

  5. Set MAX_TOKENS to the maximum number of tokens to generate. The higher the value, the longer the output.
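
Taken together, the settings above might look like the following in your agent's environment file. This is an illustrative sketch: the model path and the temperature/token values are assumptions you should replace with your own.

```shell
# Example agent settings for GPT4All running on GPU.
AI_PROVIDER=gpugpt4all
# Illustrative path; point this at the model file you downloaded.
MODEL_PATH=/path/to/your/model.bin
# Keep "default" unless a matching folder exists in model-prompts.
AI_MODEL=default
# 0 = deterministic, 1 = most creative.
AI_TEMPERATURE=0.7
# Upper bound on generated output length.
MAX_TOKENS=2000
```

For CPU-only inference, set AI_PROVIDER=gpt4all instead.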
