RimAI Framework

Mod for RimWorld 1.5, 1.6
File Size: 1.146 MB
Posted: 19 Jul @ 12:05am
Updated: 1 Sep @ 12:33pm

Description
🤖 RimAI Framework - AI-Powered RimWorld Experience

🔧 Core Framework Module
The RimAI Framework is the foundational core of the entire RimAI ecosystem. It is a dependency that handles all communication with Large Language Models (LLMs) and provides a comprehensive API for other content mods.

GitHub (Open Source): github.com/oidahdsah0/Rimworld_AI_Framework

⚡ V4.1 Key Features
  • 🔌 Data-Driven: Connect to any AI provider (OpenAI, Ollama, Groq, etc.) via simple JSON templates (see the example template below).
  • 🔄 End-to-End Streaming: A fully featured streaming API for real-time, word-by-word responses.
  • 🧠 First-Class Embeddings: A high-performance API for complex semantic understanding and memory functions.
  • 📊 Advanced Batching: Optimized concurrent requests for chat and embeddings to maximize throughput.
  • 🏠 Local-First: Full support for local, OpenAI-compatible APIs (Ollama, vLLM, etc.).
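
To illustrate the data-driven idea, here is a minimal sketch of what a provider template could look like. The field names below are invented for illustration only; the framework's actual template schema lives in BuiltInTemplates.cs in the GitHub repository.

```json
{
  "providerId": "ollama-local",
  "endpointUrl": "http://localhost:11434/v1",
  "chatPath": "/chat/completions",
  "defaultModel": "llama3",
  "requiresApiKey": false,
  "defaultParameters": {
    "temperature": 0.7,
    "max_tokens": 512
  }
}
```

Because the provider is just data, pointing the framework at a new OpenAI-compatible service is a matter of editing a URL and a model name rather than writing code.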

🔑 IMPORTANT: Setup Required Before Use

⚠️ You MUST configure the mod settings before use! ⚠️

Step-by-Step Setup Guide:
  1. Enable the mod and restart RimWorld.
  2. Go to Settings → Mod Settings → RimAI Framework.
  3. Fill in the required fields:
    • API Key: Your key for services like OpenAI. (Leave empty for local providers like Ollama).
    • Endpoint URL: The base URL for the API. This is usually pre-filled for you. Only change it if you have a specific need or the official URL changes. (e.g., `https://api.openai.com/v1` for OpenAI, `http://localhost:11434/v1` for local Ollama).
    • Model Name: The exact model name (e.g., `gpt-4o-mini`, `llama3`).
  4. Use the Test Connection button to verify your settings (if it fails, see the standalone check after this list).
  5. Save your configuration. You're ready to go!
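
If Test Connection fails and you want to rule out the endpoint itself, a quick standalone check like the sketch below can help. It is independent of the mod and assumes only that the server exposes the standard OpenAI-compatible `/models` route:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Standalone sanity check for an OpenAI-compatible endpoint.
// Not part of the mod; it only verifies the URL from your settings.
class EndpointCheck
{
    static async Task Main()
    {
        var baseUrl = "http://localhost:11434/v1"; // same value as the Endpoint URL setting
        var apiKey  = "";                          // fill in for hosted providers like OpenAI

        using var http = new HttpClient();
        if (!string.IsNullOrEmpty(apiKey))
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", apiKey);

        // GET /models is part of the standard OpenAI-compatible surface,
        // so a 200 here means the base URL is right and the server is up.
        var resp = await http.GetAsync(baseUrl + "/models");
        Console.WriteLine($"{(int)resp.StatusCode} {resp.ReasonPhrase}");
        Console.WriteLine(await resp.Content.ReadAsStringAsync());
    }
}
```

A 200 response means the base URL is reachable; a 401 usually points at the API key, and a connection error at a wrong URL or a server that isn't running.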

💡 Quick Start Recommendations:
  • Free option: Install Ollama locally with a model like `llama3`.
  • Budget option: Use OpenAI's `gpt-4o-mini` model (very affordable, ~$0.15 per 1M tokens).
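
At the quoted rate, a single request of roughly 2,000 tokens works out to about $0.0003, so even a long AI-heavy session typically costs well under a cent (providers usually bill output tokens at a higher rate, so treat this as a ballpark).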

💰 Important Cost Notice
⚠️ Token costs are paid directly to your AI service provider, NOT to the mod author! ⚠️ The mod author receives no payment from your API usage. Local models like Ollama are free to run after initial setup.

📋 Important Notice
This framework itself does not add any gameplay content but is a required dependency for all other RimAI modules.
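
For the curious, the plumbing the framework handles for content mods looks roughly like the sketch below: a framework-independent example of a single streaming chat exchange against an OpenAI-compatible endpoint (the local Ollama URL and `llama3` model are placeholder assumptions):

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Framework-independent sketch of one streaming chat exchange.
// With "stream": true the server replies with Server-Sent Events:
// one "data: {json}" line per chunk, ending with "data: [DONE]".
class StreamingSketch
{
    static async Task Main()
    {
        var body = "{\"model\":\"llama3\",\"stream\":true," +
                   "\"messages\":[{\"role\":\"user\",\"content\":\"Say hello in five words.\"}]}";

        using var http = new HttpClient();
        using var req = new HttpRequestMessage(
            HttpMethod.Post, "http://localhost:11434/v1/chat/completions")
        {
            Content = new StringContent(body, Encoding.UTF8, "application/json")
        };

        // ResponseHeadersRead lets us consume the body as it streams in.
        using var resp = await http.SendAsync(req, HttpCompletionOption.ResponseHeadersRead);
        using var reader = new StreamReader(await resp.Content.ReadAsStreamAsync());

        string line;
        while ((line = await reader.ReadLineAsync()) != null)
        {
            if (!line.StartsWith("data: ") || line == "data: [DONE]") continue;
            // Each chunk is a small JSON object; real code would parse
            // choices[0].delta.content instead of printing it raw.
            Console.WriteLine(line.Substring(6));
        }
    }
}
```

Content mods never have to write this themselves; streaming, batching, and per-provider differences are handled behind the framework's API.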

🎯 Supported Versions
✅ RimWorld 1.5
✅ RimWorld 1.6

🛡️ Open Source & Security
This project is completely open-source. You can review the source code, contribute, and report issues on our GitHub repository: https://github.com/oidahdsah0/Rimworld_AI_Framework

🔥 If you enjoy this project, please give it a thumbs-up 👍 and follow ➕ for updates on more RimAI modules!
26 Comments
Dragonissa 22 Sep @ 10:30am 
Can't test locally hosted Ollama because it complains about a missing API key.
Drunken Fish 10 Sep @ 5:58pm 
Looking at your chat template in BuiltInTemplates.cs, you could likely fix it by removing typical_p from the Gemini template.
Drunken Fish 10 Sep @ 11:36am 
I'm using Gemini as my LLM.

Calls to the API fail with:

Invalid JSON payload received. Unknown name "typical_p": Cannot find field.

This causes the framework to receive an error array instead of a JSON object, resulting in a parse failure (Failed to parse standard JSON response: Error reading JObject from JsonReader. Current JsonReader item is not an object: StartArray).

Steps to reproduce:

Enable RimAI.Framework.
Trigger any AI chat response.
Observe log error.

Expected behavior:
Valid JSON request payload should be sent without unsupported fields.

Actual behavior:
Request includes "typical_p", which is not accepted by the target API, breaking all responses.

Notes:

"typical_p" is not a valid field for the current API for Gemini Flash at least. Removing it should resolve the issue.
sleider 9 Sep @ 6:33pm 
I have the same problem as @Central. I'm using a Gemini API key.
Astora 4 Sep @ 10:29am 
I managed to get Ollama to work by setting my API key to 1; pretty sure it just needs it to not be blank for some reason.
KiloKio  [author] 2 Sep @ 7:42pm 
@Central

You might be right. I'll test this issue later and, if there are any problems, I'll try to fix them as soon as possible. Thank you for the feedback.
Central 2 Sep @ 7:42pm 
Also can't seem to get a local model (Ollama) to work...
Central 2 Sep @ 7:21pm 
Could part of the issue be the parameters? I think some of the old parameter settings might be incompatible.
KiloKio  [author] 2 Sep @ 6:29pm 
@Central

Hello,

The issue seems to be with the message "Success! Response: Unkn...". Although OpenAI returned a 200 success code, the rest of the message likely says "Unknown something...". This indicates that something in your request is unknown to OpenAI. It could be the URL or the API key. If you're certain that your key is correct, the problem is most likely the URL.

I haven't been able to test the OpenAI content myself yet since I'm not in a region where their services are available. However, I will try to find a working OpenAI service to test it with as soon as possible.
Central 2 Sep @ 6:07pm 
Hey, I'm getting some weird behavior when trying to use this. When I put in my API key for OpenAI and run the test, I get "Success! Response: Unkn..." and no functions of RimAICore seem to give any responses. It's clearly *trying* but it isn't functioning.