RimWorld

RimTalk
"Request failed: 0 - Cannot connect to destination host" - but I can see the logged API response.
Hello!

I'm a GOG user running your mod via RimSort, and I've been looking forward to trying it.

I have this set up locally with Ollama, on a 1b model for now (will upgrade later), and set to load toward the very end of my modlist.
This is a completely new save, albeit with my RimWorld 1.5 characters imported via Character Editor.

I updated the mod today as well, as I saw there had been bug fixes.

I've verified that I can reach the base URL in a browser, and prior to this week's update (which changed the API response so it's no longer logged), I could even see the response from the LLM in player.log. However, I cannot generate dialogue in-game. (Perhaps it would be helpful to be able to enable/disable debug logging?)

I see three RimTalk exceptions which are presumably generated each time RimTalk polls Ollama's API for a response from the model:
- Request failed: 0 - Cannot connect to destination host
- Exception in API request: Exception of type 'System.Exception' was thrown.
- Exception filling window for LudeonTK.EditWindow_Log: System.InvalidOperationException: Collection was modified; enumeration operation may not execute
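In case it helps, here's roughly how I checked the endpoint outside the game: a minimal Python sketch that POSTs to Ollama's /api/generate endpoint. The base URL and model tag below are just my local setup; adjust them to yours.

```python
import json
import urllib.error
import urllib.request

def probe_ollama(base_url, model, prompt="Say hi", timeout=10):
    """Send one non-streaming generate request to Ollama and return the
    reply text, or an error string if the host is unreachable."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        base_url.rstrip("/") + "/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.loads(resp.read())["response"]
    except (urllib.error.URLError, OSError) as exc:
        return f"connection failed: {exc}"

if __name__ == "__main__":
    # Default Ollama port; replace the model tag with one you've pulled.
    print(probe_ollama("http://localhost:11434", "llama3.2:1b"))
```

If this prints a reply while the mod still shows "Cannot connect to destination host", the problem is on the game side rather than with Ollama itself.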

Since it happens every few seconds, it's easy to reproduce on game load. Here's a brief (but full) log:
https://gist.github.com/HugsLibRecordKeeper/5de3f162a5b2687e7fefedad9497fad1

I've already tried removing the mod at the top of a previous stack trace (Useful Marks), but the error still occurs... I thought I'd ask rather than keep stabbing in the dark. Do you have any ideas as to why this could be occurring, e.g. mod conflicts?

I know there are other exceptions in the log, but I'm not sure they're related.
Showing 1-3 of 3 comments
Juicy  [developer] 16 Sep @ 5:38pm 
Thanks for the detailed report.

Could you try updating to the latest version of the mod? It now supports debug logging once you enable the flag.

To do this, turn on Dev Mode, then go to Options -> Dev -> Verbose Logging.

With that enabled, you should see proper request and response information in the console, which will make it much easier to diagnose issues.

That said, I strongly suspect the problem is related to the model size. Using a very small model (like 1B) can cause the LLM to produce malformed JSON or a structure that doesn't match what RimTalk expects. I recommend using at least a 4B model with a maximum temperature of 0.7. This usually ensures the model follows instructions well enough to generate a valid response.
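To illustrate why small models break things, here is a rough Python sketch of the kind of strict parsing that fails on their output. The "text" field name is just an example for this sketch, not RimTalk's actual schema:

```python
import json

def extract_dialogue(raw):
    """Parse a model reply as the JSON object the mod expects,
    returning None when the reply is unusable."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        # Small models often truncate output or wrap the JSON in prose.
        return None
    if not isinstance(data, dict) or "text" not in data:
        # Syntactically valid JSON, but not the expected structure.
        return None
    return data["text"]

# A well-formed reply parses cleanly...
print(extract_dialogue('{"text": "Nice weather for a raid."}'))
# ...while a typical tiny-model reply with extra prose yields None.
print(extract_dialogue('Sure! Here is the JSON: {"text": "hi"}'))
```

A larger model at moderate temperature is far more likely to emit the bare JSON object on the first line, which is why the 4B recommendation usually fixes this.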

Let me know if the issue persists after these changes.
Last edited by Juicy; 16 Sep @ 5:39pm
Thanks for the explanation - I can see why that'd cause issues. I'll try something a little more ambitious and report back.
Last edited by suspicious vase; 17 Sep @ 7:40pm
Hi - after a few play sessions with a 4B model, the issue has not returned. Appreciate the advice and resolution!