> OpenRouter (Recommended) vs Ollama (Not Recommended)
Aaand this line is the reason I will not use this mod, but rather implement my own.
The base mod here functions pretty well. I've been putting a lot of work into the "Overlord" expansion and have been entertained by how chaotic the AI is. See: https://gtm.steamproxy.vip/sharedfiles/filedetails/?id=3420778796
With the base mod performing well with OpenRouter, I'm considering adding a few additional API Sources:
- Google (Gemini)
- OpenAI (GPT)
- Microsoft (Copilot)
Not sure if OpenAI/Microsoft offer a dev key with free usage, but I know Google does for Gemini.
OpenRouter will still remain the default due to its support for a broad range of models and its easy pay model for those who choose to use it.
Not sure this mod really warrants a discord, but if you want to reach me outside of Steam, here's my personal discord channel: discord (.) gg/BbxXCqsW
I can look into it. I'm only really working on this in the free time I have, which is usually on weekends.
You can set the AI Debug Level to 1 in the OpenRimWorldAI menu. In the Debug window you can see the data that was sent to the AI, and the AI's responses.
And can you create a Discord server related to this mod?
- Ollama has been added. I've been testing it, but it's a bit harsh on my computer. At most I can run Phi-4, which is about 14B parameters.
I also added a new setting, "RimWorld Background", since not all models have RimWorld knowledge in their training data. This front-loads "What is RimWorld" information for better results.
Some models don't like to roleplay well...
The def is <defName>RimWorld_Background</defName>, if you guys think there's more context that could help.
Good luck with Ollama, though I still wouldn't recommend it :)
@2922608187 - I'm actually going to look into this for the next update.
It is possible. I believe Ollama also has a local API that can be called, and I have experimented with this in the past. Not sure how stable it would be for the amount of data this mod is delivering.
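For reference, Ollama's local server listens on http://localhost:11434 by default. Below is a minimal Python sketch of calling its /api/generate endpoint in non-streaming mode; the model name is a placeholder and assumes the model has already been pulled locally:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for the local Ollama server."""
    payload = json.dumps({
        "model": model,   # e.g. "phi4" -- must already be pulled locally
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )


def generate(model: str, prompt: str) -> str:
    """Send the request and return the model's text (requires Ollama running)."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]
```

How well this holds up under the volume of colony data the mod sends per request is exactly the stability question above; a local model will also generally be slower than a hosted one.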
Gemini is the default model.
I just updated the prompts to be more concise. I don't know if you've tried unchecking the "Use Free" models box to see how many other models there are, but I can't imagine myself testing every single one of those... maybe the free models sure, but.... yeah :D
Llama acts weird: it tries to generate something sensible and roleplay, but fails midway and breaks the structure of the report. MythoMax also doesn't work.
So, as the mod author, could you test these free models on your end with the free API, to see how they act, what works, what doesn't, and why?
It seems there are at least a few different errors across different models, related to different issues.
Do you plan on making something like a checklist of which free models, at least, correspond well with your mod, making it functional as intended to at least some degree?
It's kind of a mess, and it seems like you haven't had time to test it out well. Before more annoying people like me turn vocal, could you point out the default model you developed this mod for?
Do you plan on adding support to something beside OpenRouter?
I've tried some of the other models, but Gemini has been the most intelligent.
Some models are great for roleplay, some are great for writing documents, and some are good at everything.
Because the models can change and I have no control over which models exist and which don't, it makes sense to keep all available models listed and allow the user to choose what fits their play style best.
OpenRimWorldAI (AI Chat request failed: HTTP/1.1 402 Payment Required)
UnityEngine.StackTraceUtility:ExtractStackTrace ()
Verse.Log:Message (string)
OpenRimWorldAI.Log:Message (string)
OpenRimWorldAI.AI/<>c__DisplayClass13_1:<AIChatAsync>b__0 (UnityEngine.AsyncOperation)
UnityEngine.AsyncOperation:InvokeCompletionEvent ()
And it's weird, because OpenChat 3.5 7B (free) actually gives a response and has an 8k context, same as Mistral. But there is a but.
What I receive as a report is a whole damn wall of "%". Literally. The alert shows up labeled as "[...]" and it consists of %%%%%%%%%%%%%%%%%%%%%%%%%%% and so on to the very end haha
I believe the only special characters I need to be concerned with are "" quotes. I'm doing some testing to make sure it's stable, but the next update should significantly reduce the number of characters OpenRimWorldAI is sending.
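On the escaping point: if colony text is spliced into a JSON request body by hand, raw quotes are not the only hazard; newlines, backslashes, and control characters in pawn data will also corrupt the payload. A small Python illustration of why serializing the whole payload, rather than sanitizing individual characters, sidesteps the problem (the field name and sample text are made up for the example):

```python
import json

pawn_thought = 'He said "burn it all" and laughed.\nMood: -15'

# Naive string splicing produces invalid JSON: the inner quotes and the
# raw newline both corrupt the string literal.
broken = '{"prompt": "' + pawn_thought + '"}'

# Letting the JSON serializer escape everything yields a valid body.
payload = json.dumps({"prompt": pawn_thought})


def is_valid_json(s: str) -> bool:
    """Check whether a string parses as JSON."""
    try:
        json.loads(s)
        return True
    except json.JSONDecodeError:
        return False
```

The same principle applies in C#: building the request body with a JSON library instead of string concatenation makes per-character escaping unnecessary.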
You know it's RimWorld: I ask a pawn what to do with a prisoner for bashing a newborn's head with a polehammer, and I get some weird moralizing instead, while the pawn is a goddamn psycho.
First of all, expanding the description to explain the what, how, and why would do a great job; people are not so keen on AI, APIs, LLMs, and the rest of this. It would be cool to know what to expect and what the limitations are, instead of users thinking they did something wrong or hit a bug and trying to solve it alone. You know, a time saver.
Keep in mind this mod is still early development and it'll take some time and debugging to work out some of these kinks.
You're right to point out the character limitation being an issue for certain models.
All models are pulled from here: https://openrouter.ai/models
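For anyone scripting against that list: OpenRouter serves it as JSON from https://openrouter.ai/api/v1/models. A quick Python sketch that filters it down to free models; it assumes the current response shape (a "data" array of model entries) and the convention that free variants carry a ":free" suffix in their id, both of which could change:

```python
import json
import urllib.request

MODELS_URL = "https://openrouter.ai/api/v1/models"


def free_model_ids(models_json: dict) -> list[str]:
    """Pick out free-tier model ids from an OpenRouter /models response body."""
    return [m["id"] for m in models_json.get("data", []) if m["id"].endswith(":free")]


def fetch_free_models() -> list[str]:
    """Fetch the live model list and filter it (network access required)."""
    with urllib.request.urlopen(MODELS_URL) as resp:
        return free_model_ids(json.load(resp))
```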
OpenRimWorldAI ({"error":{"message":"Provider returned error","code":400,"metadata":{"raw":{"__kind":"OK","data":"{\n \"id\": \"90801ea416b7eeb7\",\n \"error\": {\n \"message\": \"Input validation error: `inputs` tokens + `max_new_tokens` must be <= 4097. Given: 4566 `inputs` tokens and 1638 `max_new_tokens`\",\n \"type\": \"invalid_request_error\",\n \"param\": \"max_tokens\",\n \"code\": null\n }\n}"},"provider_name":"Together 2"}},"user_id":"user_2h5mI0R7yNXxo0Tg7MpyI3LykZS"}
)
UnityEngine.StackTraceUtility:ExtractStackTrace ()
Verse.Log:Message (string)
OpenRimWorldAI.Log:Message (string)
OpenRimWorldAI.AI/<>c__DisplayClass13_1:<AIChatAsync>b__0 (UnityEngine.AsyncOperation)
UnityEngine.AsyncOperation:InvokeCompletionEvent ()
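That 400 is the request overflowing the model's context window: the prompt alone (4566 tokens) already exceeds the 4097-token budget, so no max_tokens value can make it fit. A sketch of the arithmetic a client could do before sending, using a rough characters-per-token estimate since the exact tokenizer varies per model:

```python
CHARS_PER_TOKEN = 4  # rough heuristic; real tokenizers vary by model


def estimate_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)


def clamp_max_tokens(prompt: str, context_window: int, desired_max_tokens: int) -> int:
    """Shrink max_tokens so prompt + completion fits the context window.
    Returns 0 when the prompt alone overflows, i.e. it must be shortened."""
    remaining = context_window - estimate_tokens(prompt)
    if remaining <= 0:
        return 0
    return min(desired_max_tokens, remaining)
```

With the numbers from the error (4566 prompt tokens plus 1638 requested completion tokens against a 4097 budget), the only fixes are trimming the prompt or switching to a larger-context model.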
Saving works.
But now this model doesn't work.
Could you check if MythoMax (free) works for you, and if so, give a step-by-step guide? I think that's what's missing most from the description: how to do it.
What I did was create an OpenRouter account. Then I found the free MythoMax model and created an API key.
I pasted the key into its place in the mod options.
First option ticked positive
Second option ticked negative
Third option ticked positive, MythoMax is selected.
I get weird errors.
Uncheck Shared Key and it'll use your key.
When I try to uncheck Use Shared Key, it won't save; when I open the settings again, it's back to being checked. I don't know if I have to do it, though.
Well no, actually the Show Daily Report setting won't save either.
I do press Save before closing, of course.