RimWorld

RimAI Framework
26 Comments
Dragonissa 22 Sep @ 10:30am 
Can't test locally hosted Ollama because it complains about missing the API Key.
Drunken Fish 10 Sep @ 5:58pm 
Looking at your chat template in BuiltInTemplates.cs, the likely fix is to remove typical_p from the Gemini template.
Drunken Fish 10 Sep @ 11:36am 
I'm using Gemini as my LLM.

Calls to the API fail with:

Invalid JSON payload received. Unknown name "typical_p": Cannot find field.


This causes the framework to receive an error array instead of a JSON object, resulting in a parse failure (Failed to parse standard JSON response: Error reading JObject from JsonReader. Current JsonReader item is not an object: StartArray).

Steps to reproduce:

Enable RimAI.Framework.
Trigger any AI chat response.
Observe log error.

Expected behavior:
Valid JSON request payload should be sent without unsupported fields.

Actual behavior:
Request includes "typical_p", which is not accepted by the target API, breaking all responses.

Notes:

"typical_p" is not a valid field for the current API for Gemini Flash at least. Removing it should resolve the issue.
sleider 9 Sep @ 6:33pm 
I have the same problem as @Central. I'm using a Gemini API key.
Astora 4 Sep @ 10:29am 
I managed to get Ollama to work by setting my API key to 1; pretty sure it just needs to be non-blank for some reason.
KiloKio  [author] 2 Sep @ 7:42pm 
@Central

You might be right. I'll test this issue later and, if there are any problems, I'll try to fix them as soon as possible. Thank you for the feedback.
Central 2 Sep @ 7:42pm 
Also can't seem to get a local model (Ollama) to work.
Central 2 Sep @ 7:21pm 
Part of the issue might be the parameters? I think some of the old parameter settings are perhaps incompatible.
KiloKio  [author] 2 Sep @ 6:29pm 
@Central

Hello,

The clue is in the message "Success! Response: Unkn...". Although OpenAI returned a 200 success code, the rest of that message presumably reads "Unknown something...". This indicates that something in your request is unknown to OpenAI: it could be the URL or the API key. If you're certain that your key is correct, the problem is most likely the URL.

I haven't been able to test the OpenAI content myself yet since I'm not in a region where their services are available. However, I will try to find a working OpenAI service to test it with as soon as possible.
Central 2 Sep @ 6:07pm 
Hey, I'm getting some weird behavior when trying to use this. When I put in my API key for OpenAI and run the test, I get "Success! Response: Unkn..." and no functions of RimAI Core seem to give any responses. It's clearly *trying*, but it isn't functioning.
KiloKio  [author] 2 Sep @ 4:36pm 
@Auster

After a month of intensive development and testing, LM Studio is now built into the selection menu. You can test it out through the new and complete RimAI Core (BETA) project.

I don't have a Mac environment, so I haven't been able to test it directly myself, but I believe it should work without major issues. Please feel free to use it, and if you encounter any problems, I'll prioritize a fix within 24 hours.
KiloKio  [author] 2 Sep @ 4:32pm 
@ Thanks for the suggestion!

Incorporating KoboldCpp directly into this mod would introduce a lot of engineering challenges.

Using llama.cpp directly also presents many installation difficulties for users.

Our current solution is to recommend using Ollama to load GGUF models, as it's already natively supported by this mod.
KiloKio  [author] 2 Sep @ 4:26pm 
@hwndk Thanks for the suggestion!

I've noted your request to add Mistral and Cloudflare to the service selection. I'll check their APIs first. If they support the OpenAI standard, I can add them directly. If not, it might take a bit longer as I'll need to adapt the integration.

In any case, thanks for your interest, and I hope you continue to enjoy using the mod!
hwndk 2 Sep @ 1:34pm 
hi, can you add Mistral and Cloudflare to the service selection in the settings?
Auster 3 Aug @ 7:27am 
@KiloKio
Thanks for the reply. The only reason I was asking is that there's another mod called EchoColony; I did a basic guide on how to set up LM Studio.
KiloKio  [author] 2 Aug @ 7:39pm 
@Auster

Of course, but a more general template is under development to make this framework compatible with Ollama, vLLM, SGLang, and LM Studio. This is because some specific smaller models have unique parameters, such as Qwen3 with its "thought switch." Passing this parameter is quite tricky, and I need to account for it as well.
Auster 2 Aug @ 5:52pm 
@KiloKio
Can you use LM Studio with this mod?
KiloKio  [author] 2 Aug @ 7:43am 
This version is quite problematic and contains numerous bugs. I'm in the process of a complete overhaul and redesign. I suggest waiting for the next release, which should be available soon.
тетеря, блин 30 Jul @ 7:00am 
while navigating the mod menu in the main menu:

https://gist.github.com/HugsLibRecordKeeper/b246824cfda8a1727b6820ffca6fc400

Exception filling window for LudeonTK.EditWindow_Log: System.InvalidOperationException: Collection was modified; enumeration operation may not execute.
[Ref 4A33AD84]
тетеря, блин 29 Jul @ 3:48pm 
What potential! Finally a reason to have a good GPU to play RimWorld. AI-driven story generation is what I like the most. Does it already work? I really wish something like that could work with something like Custom Quest Framework to generate long and logical chains of quests.
Nekko 26 Jul @ 4:33am 
I feel really dumb, but I can't connect my local LLM to the API. I'm doing everything as instructed, and RimAI just keeps telling me to check my API key or endpoint, which I did!
runningInThe20s 21 Jul @ 2:10pm 
If you are getting angry about other people playing around with LLMs in free game mods, it might be time for some self-reflection.
pklemming 19 Jul @ 3:03am 
It's nice that you support Ollama. Will try it later. Responses, even local ones, can be slow, but it's improving with time.
Vexacuz 19 Jul @ 1:49am 
It's a framework mod, guys. It says so right in the middle.
tide{S}haper industries 19 Jul @ 1:33am 
So... you don't know what it is, even after the AI tried to explain it like five times?
SOMETHING must have failed then, obviously. :D
Wonder how long it takes for the 'AI' to realize that ** also doesn't make bold text on Steam WS pages. ;)
SanguinarcAQL 19 Jul @ 12:59am 
Very interesting, can't wait to see what will be done with this