Juicy  [developer] 5 Sep @ 4:03pm
RimTalk AI Setup Guide
RimTalk allows your RimWorld colonists to “talk” using an AI brain. You can use either a Cloud Brain (online) or a Local Brain (on your computer).


1. Cloud Brain (Online)
Pros: Powerful, easy setup, low local CPU usage
Cons: Requires an API key, may cost money
  • Recommended default: Google Gemini
  • Obtain a Google API key at https://aistudio.google.com/app/apikey
  • Paste the API key into RimWorld → Options → Mod Settings → RimTalk → Basic → API Key
  • Your colonists will now talk using the cloud AI.
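If the colonists stay silent, it helps to rule out the key itself before troubleshooting the mod. As a minimal sketch from a terminal (assuming your key has access to a current Gemini model such as gemini-2.0-flash; substitute whichever model your key supports, and replace YOUR_API_KEY; quoting shown for Command Prompt):
    curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=YOUR_API_KEY" -H "Content-Type: application/json" -d "{\"contents\":[{\"parts\":[{\"text\":\"Say hello\"}]}]}"
A JSON response containing generated text means the key works; an error here points at the key or quota rather than at RimTalk.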


2. Local Brain (On Your PC)
Pros: Free to run, works offline, completely private
Cons: Needs a good CPU/GPU, setup is slightly more involved
To use a Local Brain, you need to run an AI model on your own machine. RimTalk connects to it via a local API (a sketch of what such a request looks like follows the steps below).
  1. Switch to Advanced Settings: In RimTalk settings → Basic tab → click the settings button to toggle from "Simple" to "Advanced".
  2. Select the "Local Provider" radio button.
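To give a feel for what "a local API" means here: both LM Studio and Ollama run a small web server on your machine that answers chat requests in an OpenAI-style format. The exact route and payload RimTalk sends are internal to the mod, so treat this only as a rough sketch of a request to a local server on the default LM Studio port:
    curl http://localhost:1234/v1/chat/completions -H "Content-Type: application/json" -d "{\"messages\":[{\"role\":\"user\",\"content\":\"Say hello\"}]}"
The two setups below differ mainly in which program runs that server and on which port.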


3. How to Choose a Local Model (VRAM is Key)
Running an AI model locally requires significant computer resources, primarily Video RAM (VRAM) from your GPU. Choosing the right model is a balance between performance and the quality of the dialogue.
  • What is a Model Size? A model's size is measured in "parameters" (e.g., 3B for 3 billion). Larger models are often "smarter" but require more VRAM.
  • What is Quantization? To run on consumer hardware, models are compressed in a process called quantization. This makes them much smaller. You will see different versions of the same model (e.g., Q4_K_M, Q5_K_M). Higher numbers generally mean better quality but larger VRAM usage.
  • How to Check Your VRAM: In Windows, open Task Manager (Ctrl+Shift+Esc), go to the "Performance" tab, and select your GPU to see your dedicated VRAM (a command-line alternative appears after the guidelines below).
General VRAM Guidelines:
  • Less than 6 GB VRAM: Stick to smaller models, typically in the 3 Billion (3B) parameter range. Look for heavily quantized versions (e.g., Q4_K_M or lower).
  • 8 GB - 12 GB VRAM: This is the sweet spot for many gaming PCs. You can comfortably run popular 7B to 9B parameter models, which offer a great balance of quality and performance.
  • 16 GB VRAM or more: You can run larger, more capable models, such as those in the 13B parameter range, with high-quality quantization for better dialogue.
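A back-of-the-envelope size check: Q4 quantization stores roughly half a byte per parameter, so a 12B model at Q4_K_M is a file on the order of 7 GB, and it needs somewhat more than that in VRAM once context and overhead are included. If you have an NVIDIA GPU, you can also read your dedicated VRAM from a terminal instead of Task Manager:
    nvidia-smi
The memory column shows used versus total VRAM; compare the total against the guidelines above.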


4. LM Studio Local AI Setup
  • Download LM Studio here: https://lmstudio.ai/
  • Open LM Studio and go to the "Search" tab (🔍).
    • Search for a model that fits your hardware. A good choice for dialogue is "Gemma 3 12B" in GGUF format with Q4_K_M quantization.
    • Check VRAM requirements: Before downloading, look at the list on the right. LM Studio provides an estimated VRAM/RAM usage for each file version. Choose one that fits comfortably within your VRAM.
    • Click "Download".
  • Go to the "Local Server" tab (🔌).
    • Select your downloaded model from the dropdown menu at the top.
    • Click "Start Server".
  • LM Studio will show your Base URL, which is usually: http://localhost:1234 or http://127.0.0.1:1234
  • In RimTalk settings → API tab → Local Provider Configuration:
    • Base URL: Paste LM Studio’s URL.
    • Model: The model name is not required for LM Studio, as the loaded model is handled by the server. You can leave this field blank.
    • AI Cooldown: Adjust to your preference (e.g., 1–2 seconds).
  • Keep LM Studio running in the background while you play RimWorld.
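If RimTalk can't reach LM Studio, a quick sanity check is to ask the server what it has loaded. LM Studio's local server is OpenAI-compatible, so listing models from a terminal (adjust the port if you changed the default 1234) should answer immediately:
    curl http://localhost:1234/v1/models
A JSON list that includes your downloaded model means the server side is working; a connection error means the server isn't running or something is blocking the port.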


5. Ollama Local AI Setup
  • Install Ollama here: https://ollama.com/
  • Find a model in the Ollama Library (https://ollama.com/library). Choose one appropriate for your VRAM (e.g., gemma3:12b).
  • Open a terminal (Command Prompt or PowerShell) and pull the model. For example:
    ollama pull gemma3:12b
  • IMPORTANT: Find your exact model name. Ollama uses specific tags (like :latest or :12b). In the terminal, run the following command to see all the models you have installed:
    ollama list
  • You will see a list of your models. Copy the exact name from the NAME column.
  • Start the server in the terminal (if it's not already running) with this command:
    ollama serve
  • The Base URL for Ollama is usually: http://localhost:11434 or http://127.0.0.1:11434
  • In RimTalk settings → API tab → Local Provider Configuration:
    • Base URL: Paste the Ollama server URL.
    • Model: Enter the exact model name you found using the ollama list command (e.g., gemma3:12b).
    • AI Cooldown: Adjust to your preference.
  • Keep the Ollama terminal window running in the background while you play.
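As with LM Studio, you can confirm the server and the model name from a terminal before touching the mod settings. Ollama replies to a plain request on its port, and its tags endpoint lists the installed models (the same information ollama list prints):
    curl http://localhost:11434
    curl http://localhost:11434/api/tags
If the first command prints "Ollama is running" and the second lists your model, copy the name exactly as shown into RimTalk's Model field.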
Last edited by Juicy; 6 Sep @ 2:00pm
Showing 1-1 of 1 comments
API key pooling services for online providers, such as OneAPI, combine multiple API keys into a single key that can be used like any mainstream provider's key. However, they require specifying both a Base URL and an API key, which RimTalk currently does not allow.