Calls to the API fail with:
Invalid JSON payload received. Unknown name "typical_p": Cannot find field.
This causes the framework to receive an error array instead of a JSON object, resulting in a parse failure (Failed to parse standard JSON response: Error reading JObject from JsonReader. Current JsonReader item is not an object: StartArray).
Steps to reproduce:
Enable RimAI.Framework.
Trigger any AI chat response.
Observe log error.
Expected behavior:
Valid JSON request payload should be sent without unsupported fields.
Actual behavior:
Request includes "typical_p", which is not accepted by the target API, breaking all responses.
Notes:
"typical_p" is not a valid field for the current API, at least for Gemini Flash. Removing it should resolve the issue.
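A minimal sketch of the suggested fix: filter the request payload down to fields the target endpoint accepts before sending it. The allow-list and model name below are illustrative assumptions, not the framework's actual code.

```python
# Parameters broadly accepted by OpenAI-compatible chat endpoints.
# Strict backends (e.g. Gemini's compatibility layer) reject the whole
# request if any unknown field such as "typical_p" is present.
SUPPORTED_FIELDS = {
    "model", "messages", "temperature", "top_p", "max_tokens", "stream",
}

def sanitize_payload(payload: dict) -> dict:
    """Drop fields the target API does not recognize."""
    return {k: v for k, v in payload.items() if k in SUPPORTED_FIELDS}

payload = {
    "model": "gemini-2.0-flash",  # assumed model name for illustration
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7,
    "typical_p": 0.9,  # the field reported as unknown by the API
}
clean = sanitize_payload(payload)
```

After sanitizing, `clean` no longer carries `"typical_p"`, so the endpoint returns a normal JSON object instead of an error array.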
You might be right. I'll test this issue later and, if there are any problems, I'll try to fix them as soon as possible. Thank you for the feedback.
Hello,
The issue seems to be with the message "Success! Response: Unkn...". Although OpenAI returned a 200 status code, the truncated message most likely continues "Unknown something...". That means something in your request is unknown to OpenAI; it could be the URL or the API key. If you're certain your key is correct, the problem is most likely the URL.
I haven't been able to test the OpenAI content myself yet since I'm not in a region where their services are available. However, I will try to find a working OpenAI service to test it with as soon as possible.
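A quick way to rule out the URL before blaming the key is to check that the configured base URL is well-formed and builds the expected endpoint path. This sketch assumes the OpenAI path convention (base URL + `/chat/completions`); the base URL shown is an example, not a recommendation.

```python
# Sanity-check the configured base URL: a wrong base is a common cause
# of "Unknown ..." responses even when the API key is valid.
from urllib.parse import urlparse

def chat_completions_url(base_url: str) -> str:
    """Build the chat endpoint from a base URL like
    'https://api.openai.com/v1' following the OpenAI path convention."""
    parsed = urlparse(base_url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError(f"Malformed base URL: {base_url!r}")
    return base_url.rstrip("/") + "/chat/completions"
```

If the function raises, or the resulting URL doesn't match what the provider's documentation lists, the URL is the thing to fix first.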
After a month of intensive development and testing, LM Studio is now built into the selection menu. You can test it out through the new and complete RimAI Core (BETA) project.
I don't have a Mac environment, so I haven't been able to test it directly myself, but I believe it should work without major issues. Please feel free to use it, and if you encounter any problems, I'll prioritize a fix within 24 hours.
Incorporating KoboldCpp directly into this mod would introduce a lot of engineering challenges.
Using llama.cpp directly also presents many installation difficulties for users.
Our current solution is to recommend using Ollama to load GGUF models, as it's already natively supported by this mod.
I've noted your request to add Mistral and Cloudflare to the service selection. I'll check their APIs first. If they support the OpenAI standard, I can add them directly. If not, it might take a bit longer as I'll need to adapt the integration.
In any case, thanks for your interest, and I hope you continue to enjoy using the mod!
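For anyone following the Ollama recommendation above: Ollama exposes an OpenAI-compatible API at `http://localhost:11434/v1`, so a GGUF model loaded into Ollama can be used by pointing the base URL there. The model name below is an assumption; use whatever `ollama list` shows on your machine. This sketch only constructs the request and does not send it.

```python
import json

# Ollama's default OpenAI-compatible endpoint (local install).
OLLAMA_BASE_URL = "http://localhost:11434/v1"

def build_ollama_request(model: str, prompt: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for a chat completion against
    a locally running Ollama instance."""
    url = OLLAMA_BASE_URL + "/chat/completions"
    body = json.dumps({
        "model": model,  # e.g. a GGUF model pulled via `ollama pull`
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }).encode("utf-8")
    return url, body

url, body = build_ollama_request("llama3.1:8b", "Hello")
```

Sending `body` as a POST to `url` with `Content-Type: application/json` is all the mod-side integration needs, since the response shape matches OpenAI's.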
Thanks for the reply. The only reason I was asking is that there is another mod called EchoColony, and I did a basic guide on how to set up LM Studio.
Of course, but a more general template is under development to make this framework compatible with Ollama, vLLM, SGLang, and LM Studio. Some smaller models have unique parameters, such as Qwen3 with its "thought switch"; passing such parameters is tricky, and I need to account for them as well.
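The per-model parameter problem described above can be handled with a small adapter that injects backend-specific fields only for the models that accept them. The exact flag names below (`chat_template_kwargs`, `enable_thinking`) follow how vLLM exposes Qwen3's thinking toggle, but treat them as assumptions; other backends may spell this differently.

```python
# Sketch of a per-model parameter adapter: inject extra knobs (such as
# Qwen3's "thought switch") only for backends that understand them, so
# other APIs don't reject the request over an unknown field.
BASE_PAYLOAD = {"messages": [], "temperature": 0.7}

MODEL_EXTRAS = {
    # Qwen3: disable chain-of-thought output (vLLM-style flag, assumed).
    "qwen3": {"chat_template_kwargs": {"enable_thinking": False}},
}

def payload_for(model: str) -> dict:
    """Build a request payload, adding model-specific extras by prefix."""
    payload = dict(BASE_PAYLOAD, model=model)
    for prefix, extras in MODEL_EXTRAS.items():
        if model.lower().startswith(prefix):
            payload.update(extras)
    return payload
```

With this shape, adding another quirky backend is just one more entry in `MODEL_EXTRAS` rather than a branch in the request-building code.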
Can you use LM Studio with this mod?
https://gist.github.com/HugsLibRecordKeeper/b246824cfda8a1727b6820ffca6fc400
Exception filling window for LudeonTK.EditWindow_Log: System.InvalidOperationException: Collection was modified; enumeration operation may not execute.
[Ref 4A33AD84]
SOMETHING must have failed then, obviously. :D
Wonder how long it takes for the 'AI' to realize that ** also doesn't make bold text on Steam WS pages. ;)