RimWorld

OpenRimWorldAI
60 Comments
Darksy Lin 12 Aug @ 10:53pm 
The best mod ever. Please update it for 1.6, it's a really cool mod.
Rage 21 Jun @ 11:17pm 
Requesting you make an API adaptation for Anthropic itself. Claude systems would do wonders, and I refuse to constantly pay shoddy middlemen every time it's AI related. At the current moment Anthropic is the only dev of a good AI system.
Krypt 23 May @ 8:53pm 
I can't restrain myself from making this absolutely unconstructive comment:
> OpenRouter (Recommended) vs Ollama (Not Recommended)

Aaand this line is the reason I will not use this mod, but rather implement my own.
GrimStrider 15 Apr @ 12:27am 
The AI seems to be very confused about what gender my pawns are XD
SenjuWoo 10 Mar @ 2:23pm 
All good
LoneLemone  [author] 10 Mar @ 2:17pm 
I don't really have the time right now. Between vacation and business travel I'm a little too busy right now.
Bleeding Eyes Mcgee 8 Mar @ 1:33pm 
It seems to have been removed, but a while ago the description mentioned plans for another addon that deals with moods, relationships and conversations. Is there still a chance that it might come out eventually, or was it fully cancelled?
SenjuWoo 18 Feb @ 9:06pm 
It would be nice if you made another expansion where the AI could generate names for pawns and stuff, knowing their names and all their info from the game, and also seed names, world names, tribe names etc...
LoneLemone  [author] 15 Feb @ 6:21pm 
Hey everyone,

The base mod here functions pretty well. I've been putting a lot of work into the "Overlord" expansion and have been entertained by how chaotic the AI is. See: https://gtm.steamproxy.vip/sharedfiles/filedetails/?id=3420778796

With the base mod performing well with OpenRouter, I'm considering adding a few additional API Sources:
- Google (Gemini)
- OpenAI (GPT)
- Microsoft (Copilot)

Not sure if OpenAI/Microsoft offer a dev key with free usage, but I know Google does for Gemini.

OpenRouter will remain the default due to its capability of supporting a broad range of models and its easy pay model for those who choose to use it.
2922608187 12 Feb @ 11:35am 
Thanks, got it
LoneLemone  [author] 12 Feb @ 11:29am 
@2922608187
Not sure this mod really warrants a discord, but if you want to reach me outside of Steam, here's my personal discord channel: discord (.) gg/BbxXCqsW
LoneLemone  [author] 12 Feb @ 11:20am 
@2922608187
I can look into it. I'm only really working on this in the free-time I have which is usually during the weekends.

You can set the AI Debug Level to 1 in the OpenRimWorldAI menu. In the Debug window you can see the data that was sent to the AI, and the AI's responses.
2922608187 12 Feb @ 2:47am 
Can you create a separate log UI to view the behavior history of the AI? I'd like to collect the processed data to refine the RimWorld Background for use with local models.
And can you create a Discord server related to this mod?
2922608187 9 Feb @ 7:50pm 
Thanks, this will allow for more possibilities
SenjuWoo 9 Feb @ 1:01pm 
Thanks
LoneLemone  [author] 9 Feb @ 1:00pm 
@2922608187 & @SenjuWoo
- Ollama has been added. I've been testing it, but it's a bit harsh on my computer. At most I can run Phi4 which is about 14b.

I also added a new setting, "RimWorld Background", since not all models have RimWorld knowledge in their training data. This front-loads "What is RimWorld" information for better results.

Some models don't like to roleplay well...

Check <defName>RimWorld_Background</defName> if you guys think there's more context that could help.

Good luck with Ollama -- I still wouldn't recommend it :)
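For anyone wanting to extend that context: RimWorld mod Defs live in XML, so a background patch would presumably look something like the sketch below. Only `<defName>RimWorld_Background</defName>` comes from the comment above; the Def class name and field here are invented for illustration, so check the mod's Defs folder for the real structure.

```xml
<Defs>
  <!-- Hypothetical structure; only the defName is confirmed above. -->
  <OpenRimWorldAI.PromptDef>
    <defName>RimWorld_Background</defName>
    <text>RimWorld is a sci-fi colony simulator where pawns have traits,
    moods, skills, and backstories; respond in-character for the colony.</text>
  </OpenRimWorldAI.PromptDef>
</Defs>
```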
SenjuWoo 8 Feb @ 6:09pm 
Thanks for the update!
SenjuWoo 8 Feb @ 6:09pm 
Local models are kinda disappointing tho, just use DeepSeek
2922608187 8 Feb @ 5:58pm 
Expect it
SenjuWoo 8 Feb @ 4:05pm 
@LoneLemone thanks man I appreciate it
Szczygiel 8 Feb @ 3:38pm 
Ollama would be cool, it has a local variant, I already have it. But it can tank performance a little bit.
LoneLemone  [author] 8 Feb @ 2:33pm 
@SenjuWoo - I'm converting the prompts to Defs. I've been testing it this week. I will be updating shortly.

@2922608187 - I'm actually going to look into this for the next update.
LoneLemone  [author] 8 Feb @ 2:30pm 
@2922608187
It is possible. I believe Ollama also has a local API that can be called, and I have experimented with this in the past. Not sure how stable it would be for the amount of data this mod is delivering.
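For reference, Ollama does expose a local HTTP API (on port 11434 by default). A minimal Python sketch of a single non-streaming request; the model name and prompt are placeholders, and the mod itself is C#, so this only illustrates the request shape:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body Ollama's /api/generate endpoint expects."""
    body = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(body).encode("utf-8")

def ask_ollama(model: str, prompt: str) -> str:
    """Send one generation request to a locally running Ollama server."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming responses carry the full text in "response".
        return json.loads(resp.read())["response"]
```

Stability for large payloads would mostly come down to the local model's context window, not the API itself.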
2922608187 8 Feb @ 3:37am 
Is it possible to allow this mod to use local ai models? Such as using ollama
SenjuWoo 5 Feb @ 12:37pm 
Please add a way so we can edit in a preamble to tell it exactly what to write down in the report :happyio:
LoneLemone  [author] 31 Jan @ 4:23pm 
I always come back with the milk.
Szczygiel 31 Jan @ 10:35am 
But you mean week, you aint gone for milk like my dad?
Szczygiel 27 Jan @ 1:20am 
And another hour ago. I will give it try later and tell you if I discovered something off.
LoneLemone  [author] 26 Jan @ 12:51pm 
@Szczygiel - FYI, I just released an update. Let me know if that changes any of the results.
LoneLemone  [author] 26 Jan @ 12:51pm 
@Szczygiel - I appreciate the feedback.

Gemini is the default model.

I just updated the prompts to be more concise. I don't know if you've tried unchecking the "Use Free" models box to see how many other models there are, but I can't imagine myself testing every single one of those... maybe the free models sure, but.... yeah :D
Szczygiel 26 Jan @ 12:35pm 
So from the free models: Mistral doesn't work, too many tokens. OpenChat works but it's bugged and gives weird reports consisting of a wall of %%.
Llama acts weird, tries to generate something with sense and roleplay but fails midway and breaks the structure of the report. MythoMax also doesn't work.
So, as mod author, could you test these free models on your end, on the free API, to see how they act, what they do, what they don't, and why?
It seems there are at least a few different errors on different models, related to different issues.
Do you plan on making something like a checklist of which free models correspond well with your mod, making it at least functional as intended to some degree?
It's kinda a mess, and it seems like you haven't had time to test it out well. Before more annoying people like me turn vocal, could you point to the default model you developed this mod for?
Do you plan on adding support to something beside OpenRouter?
LoneLemone  [author] 26 Jan @ 11:59am 
@Szczygiel Some language models are smarter than others. For example, I'm currently working on an "Overlord" expansion so AI can generate buildings and assign priorities to Pawns.

I've tried some of the other models, but Gemini has been the most intelligent.

Some models are great for roleplay, some are great for writing documents, and some are good at everything.

Because the models can change, and I have no control over which models exist and which don't, it makes sense to keep all available models and allow the user to choose what fits their play style best.
Szczygiel 26 Jan @ 11:37am 
This is the error for the free Mistral model. Looks like it's still above the limit of Mistral's 8k context, which is weird.

OpenRimWorldAI (AI Chat request failed: HTTP/1.1 402 Payment Required)
UnityEngine.StackTraceUtility:ExtractStackTrace ()
Verse.Log:Message (string)
OpenRimWorldAI.Log:Message (string)
OpenRimWorldAI.AI/<>c__DisplayClass13_1:<AIChatAsync>b__0 (UnityEngine.AsyncOperation)
UnityEngine.AsyncOperation:InvokeCompletionEvent ()

And it's weird because OpenChat 3.5 7B (free)
actually gives a response, and has the same 8k context as Mistral. But there is a but.
What I receive as a report is a whole damn wall of "%". Literally. The alert shows up labeled as "[...]" and it consists of %%%%%%%%%%%%%%%%%%%%%%%%%%% and so on to the very end haha
Szczygiel 26 Jan @ 11:04am 
I mean, there are other free models capable of being, let's say, not so censored, but some are described as 2k context / 2k max output, others as 8k context / 4k output. What my error addressed is: which limit? Context or output?
LoneLemone  [author] 26 Jan @ 10:48am 
@Szczygiel - Currently all my prompts are pre-coded. Additionally, right now I'm lazily URL-encoding the prompts, which is probably tripping the character count.

I believe the only special characters I need to be concerned with are "" quotes. I'm doing some testing to make sure it's stable, but the next update should significantly reduce the characters OpenRimWorldAI is sending.
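To illustrate the URL-encoding overhead mentioned above: every space, quote, and most punctuation balloons into a three-character percent escape, which is easy to verify (the sample prompt is made up):

```python
from urllib.parse import quote

prompt = 'Colonist "Daisy" had a mental break near the kitchen'
encoded = quote(prompt)

# Each space becomes %20 and each quote %22, so the encoded
# form is strictly longer than the raw prompt.
assert len(encoded) > len(prompt)
overhead = len(encoded) - len(prompt)
```

Sending the prompt as a JSON request body instead avoids most of this inflation, since only quotes, backslashes, and control characters need escaping there.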
Szczygiel 26 Jan @ 10:48am 
What matters in the case of that earlier error, when it comes down to a model? Context or max output? Where is the bottleneck?
Szczygiel 26 Jan @ 10:12am 
Totally not a problem and I have this in mind, I mean it is kinda a problem but you know. Looking forward to the development, good job you devman.
You know it's RimWorld: I ask a pawn what to do with a prisoner for bashing a newborn's head with a polehammer, and I get some weird moralisation instead, while the pawn is a goddamn psycho.
First of all, expanding the description to explain what, how, and why would do a great job; people are not so keen on AI, APIs, LLMs and the rest of this. Would be cool to know what to expect and what the limitations are, instead of thinking the user is doing something wrong or experienced a bug and trying to solve it alone, you know - time saver
LoneLemone  [author] 26 Jan @ 7:24am 
@Szczygiel My solution for this would be to have the AI summarize the first 4k characters, then include the remaining characters for the final output, or I could just trim the extra characters. Either way there will be some data loss that'll affect the AI's responses.

Keep in mind this mod is still early development and it'll take some time and debugging to work out some of these kinks.

You're right to point out the character limitation being an issue for certain models.
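The trim-or-summarize trade-off above can be sketched as follows. The character budget is a made-up number and real limits are token-based and model-specific, so treat this as illustration only:

```python
def fit_prompt(prompt: str, char_budget: int = 4000) -> str:
    """Naively trim a prompt to a character budget, keeping the start.

    Anything past the budget is simply dropped, so later details are
    lost -- the data-loss trade-off described above. A summarization
    pass over the head of the prompt loses detail more gracefully but
    costs an extra model call.
    """
    if len(prompt) <= char_budget:
        return prompt
    return prompt[:char_budget]
```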
Szczygiel 26 Jan @ 7:09am 
Yeah, but it's listed in your mod yet it's not working. Any others that will actually work, not SFW dum dums like Gemini?
LoneLemone  [author] 26 Jan @ 6:14am 
@Szczygiel The model you're using has a character limit. Gemini has been the best when it comes to character limits.

All models are pulled from here: https://openrouter.ai/models
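The model list at that URL is also served as JSON from OpenRouter's API (GET https://openrouter.ai/api/v1/models, to the best of my knowledge; check their docs). A sketch of filtering for free models, assuming free variants carry a ":free" suffix in their id as they do on the site today:

```python
import json
import urllib.request

MODELS_URL = "https://openrouter.ai/api/v1/models"

def free_model_ids(models_json: str) -> list[str]:
    """Return ids of free models from an OpenRouter /models response."""
    data = json.loads(models_json)["data"]
    return [m["id"] for m in data if m["id"].endswith(":free")]

def fetch_models() -> str:
    """Download the live model list (network access required)."""
    with urllib.request.urlopen(MODELS_URL) as resp:
        return resp.read().decode("utf-8")
```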
Szczygiel 26 Jan @ 3:26am 
This is the one upon loading a save. Not the only one, but one:

OpenRimWorldAI ({"error":{"message":"Provider returned error","code":400,"metadata":{"raw":{"__kind":"OK","data":"{\n \"id\": \"90801ea416b7eeb7\",\n \"error\": {\n \"message\": \"Input validation error: `inputs` tokens + `max_new_tokens` must be <= 4097. Given: 4566 `inputs` tokens and 1638 `max_new_tokens`\",\n \"type\": \"invalid_request_error\",\n \"param\": \"max_tokens\",\n \"code\": null\n }\n}"},"provider_name":"Together 2"}},"user_id":"user_2h5mI0R7yNXxo0Tg7MpyI3LykZS"}

)
UnityEngine.StackTraceUtility:ExtractStackTrace ()
Verse.Log:Message (string)
OpenRimWorldAI.Log:Message (string)
OpenRimWorldAI.AI/<>c__DisplayClass13_1:<AIChatAsync>b__0 (UnityEngine.AsyncOperation)
UnityEngine.AsyncOperation:InvokeCompletionEvent ()
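For readers hitting the same error: the provider rejects the request because input tokens plus requested output tokens exceed the model's context window (4566 + 1638 = 6204 > 4097). A sketch of the pre-flight check a client could do, using the numbers from the error above:

```python
CONTEXT_LIMIT = 4097   # limit reported by the provider in the error above
input_tokens = 4566    # tokens in the prompt, from the same error
requested_new = 1638   # the max_new_tokens the request asked for

total = input_tokens + requested_new
assert total == 6204 and total > CONTEXT_LIMIT  # hence the rejection

# One mitigation: cap the output budget to whatever room remains.
safe_new = max(0, CONTEXT_LIMIT - input_tokens)
# safe_new == 0 here: the prompt alone already overflows this model's
# context, so the input itself must be trimmed or summarized.
```

This is why only shrinking `max_new_tokens` can't fix this particular failure.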
Szczygiel 26 Jan @ 3:15am 
Cool thanks.
Saving works.
But now this model doesn't work.
Could you check if Mytho free works for you, and if so, give a step-by-step guide? I think that's what's missing the most in the description: how to do it.
What I did is create an OpenRouter account. Then I found the free Mytho model and created an API key.
I pasted the key into its place in the mod options.
First option ticked positive
Second option ticked negative
Third option ticked positive, Mytho is selected.

I get weird errors.
LoneLemone  [author] 26 Jan @ 12:59am 
@Szczygiel Sorry about that - had some left over variables hanging around. I just pushed an update out, should work correctly now.

Uncheck Shared Key and it'll use your key.
Szczygiel 26 Jan @ 12:50am 
Okay, so I have an OpenRouter account, I have my key and pasted it into settings, I selected MythoMax, and all options are ticked to enabled. Is that it? Now it will use my key and this model?
When trying to uncheck "use shared key" it won't save; when I open settings again it's back to being checked positive. I don't know if I have to do that though.


Well no, actually the "show daily report" setting won't save either.
I do press save before closing, ofc.
LoneLemone  [author] 25 Jan @ 6:41pm 
@Szczygiel the little android icon in the bottom right. I'll add a screenshot.
Szczygiel 25 Jan @ 4:12pm 
In the description it says I can provide my own API key. How do I do it?
LoneLemone  [author] 24 Jan @ 8:34pm 
@The Count99, you're right. It's counting animals as well. I'm fixing it for the next update.
TurtleShroom 24 Jan @ 4:00pm 
What does this Mod do?
CyberNova 24 Jan @ 3:55pm 
It's often referencing pawns that don't exist for me
Franklin 24 Jan @ 1:44am 
@loneLemone I want to see my game being ruined by my own pawn ;)