GPT4ALL
Warning: Intel-based Macs are not supported by GPT4All.
Note: AI_MODEL should stay default unless there is a folder in model-prompts specific to the model that you're using. You can also create one and add your own prompts.
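For illustration only, a model-specific prompt folder would sit alongside the default one roughly like this; the folder name gpt4all-lora-quantized and the file contents are assumptions, so mirror whatever your installation's model-prompts/default folder actually contains:

```
model-prompts/
├── default/                    # prompt templates shipped with the project
│   └── ...
└── gpt4all-lora-quantized/     # your custom prompts for this specific model
    └── ...
```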
- Set AI_PROVIDER to gpt4all, or to gpugpt4all if you want to run with a GPU.
- Set MODEL_NAME to the name of the model, such as gpt4all-lora-quantized.
- Set AI_MODEL to default, or to the name of the model's folder in the model-prompts folder.
- Set AI_TEMPERATURE to a value between 0 and 1; the higher the value, the more creative the output.
- Set MAX_TOKENS to the maximum number of tokens to generate; the higher the value, the longer the output.
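As a sketch only, the settings above might look like this in your environment or configuration file; the temperature and token values are placeholders, so adjust them to your setup:

```
# Use gpugpt4all instead if you want to run with a GPU
AI_PROVIDER=gpt4all
MODEL_NAME=gpt4all-lora-quantized
# "default", or the name of a folder in model-prompts
AI_MODEL=default
# Between 0 and 1; higher values give more creative output (0.7 is a placeholder)
AI_TEMPERATURE=0.7
# Maximum number of tokens to generate (2000 is a placeholder)
MAX_TOKENS=2000
```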