VPet-Simulator

Shetani 16 Aug, 2023 @ 1:25pm
any way to chat with it without chatgpt?
Is there a way to AI chat with it without ChatGPT? I don't have the money for it lmao
Dycool51 18 Aug, 2023 @ 1:01pm 
True, I was hoping we could maybe use character.ai characters or something like that.
The free chat install says "server overload, please try again later" lmao :V
insomnyawolf 22 Aug, 2023 @ 8:08am 
Maybe we could create a plugin/mod that lets us use llama.cpp or ExLlama locally. Do you think it would be worth investigating further?
Shetani 6 Sep, 2023 @ 4:54am 
Originally posted by insomnyawolf:
Maybe we could create a plugin/mod that lets us use llama.cpp or ExLlama locally. Do you think it would be worth investigating further?
idk what that is, but if it works then yes ofc
ninefid 16 Nov, 2023 @ 10:55am 
Originally posted by insomnyawolf:
Maybe we could create a plugin/mod that lets us use llama.cpp or ExLlama locally. Do you think it would be worth investigating further?
The llama.cpp Python bindings already include a server you can reach through OpenAI's module. Perhaps we can swap OpenAI's endpoint for the local one.
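For anyone curious, here's an untested sketch of that idea: llama-cpp-python ships a bundled server that speaks the OpenAI API, so the stock openai client can just be pointed at it. The model path, port, and model name below are placeholders:
```python
# Untested sketch. Start llama-cpp-python's bundled OpenAI-compatible server
# first (the model path is a placeholder):
#   pip install "llama-cpp-python[server]"
#   python -m llama_cpp.server --model ./models/some-model.gguf --port 8000
from openai import OpenAI

# Point the stock OpenAI client at the local server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

reply = client.chat.completions.create(
    model="local",  # most local servers accept any model name here
    messages=[{"role": "user", "content": "Say hi to my desktop pet!"}],
)
print(reply.choices[0].message.content)
```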
Drudge 17 Jan, 2024 @ 1:11am 
Originally posted by den0620:
Originally posted by insomnyawolf:
Maybe we could create a plugin/mod that lets us use llama.cpp or ExLlama locally. Do you think it would be worth investigating further?
The llama.cpp Python bindings already include a server you can reach through OpenAI's module. Perhaps we can swap OpenAI's endpoint for the local one.

I beg of you
insomnyawolf 19 Jan, 2024 @ 6:41am 
Things have advanced a lot lately, so we could try it nowadays; even ooba's UI has an OpenAI-compatible API, streaming included.

And models/libraries have advanced a lot as well.

ExLlamaV2 is nuts for speed: if you can load the whole model into VRAM it can answer in less than 5 seconds (but it requires a modern NVIDIA GPU).

llama.cpp is a bit slower but works anywhere (even without a GPU, though that will be slow) and can use some system memory when VRAM is not enough.

Which model to use depends on what you're looking for; I've found that nowadays there are models that can have a nice conversation with the user, but they still narrate things and break sometimes.

If any of you are still really interested, send me a message anywhere and we can try to make it real.
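To make the streaming part concrete, here's a rough, untested sketch against any local OpenAI-compatible endpoint (ooba's UI, llama-cpp-python, and similar); the URL, port, and model name are placeholders:
```python
# Rough sketch of the streaming flow described above, assuming a local
# OpenAI-compatible endpoint is already running.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:5000/v1", api_key="local")

stream = client.chat.completions.create(
    model="local-model",  # placeholder; local servers usually ignore this
    messages=[{"role": "user", "content": "Tell me something nice."}],
    stream=True,  # chunks arrive as tokens are generated
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:  # the final chunk carries no content
        print(delta, end="", flush=True)
print()
```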
Shetani 19 Jan, 2024 @ 11:03am 
Originally posted by insomnyawolf:
Things have advanced a lot lately, so we could try it nowadays; even ooba's UI has an OpenAI-compatible API, streaming included.

And models/libraries have advanced a lot as well.

ExLlamaV2 is nuts for speed: if you can load the whole model into VRAM it can answer in less than 5 seconds (but it requires a modern NVIDIA GPU).

llama.cpp is a bit slower but works anywhere (even without a GPU, though that will be slow) and can use some system memory when VRAM is not enough.

Which model to use depends on what you're looking for; I've found that nowadays there are models that can have a nice conversation with the user, but they still narrate things and break sometimes.

If any of you are still really interested, send me a message anywhere and we can try to make it real.
if you actually can that would be sick lol
ninefid 21 Jan, 2024 @ 8:05am 
Originally posted by insomnyawolf:
Things have advanced a lot lately, so we could try it nowadays; even ooba's UI has an OpenAI-compatible API, streaming included.

And models/libraries have advanced a lot as well.

ExLlamaV2 is nuts for speed: if you can load the whole model into VRAM it can answer in less than 5 seconds (but it requires a modern NVIDIA GPU).

llama.cpp is a bit slower but works anywhere (even without a GPU, though that will be slow) and can use some system memory when VRAM is not enough.

Which model to use depends on what you're looking for; I've found that nowadays there are models that can have a nice conversation with the user, but they still narrate things and break sometimes.

If any of you are still really interested, send me a message anywhere and we can try to make it real.

Does ExLlama have a server? I tried to move from llama.cpp to ExLlama as soon as I got a powerful enough GPU, but I couldn't find one.
v1ckxy 9 Apr, 2024 @ 9:28am 
Connect the ChatGPT mod to text-generation-webui:

https://github.com/oobabooga/text-generation-webui/wiki/12-%E2%80%90-OpenAI-API

Zero cost. You can run an instance almost anywhere.
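As an untested sketch of wiring this up, here's a quick smoke test against the webui's OpenAI-compatible endpoint described on that wiki page; it assumes the webui was started with the --api flag (port 5000 is its default API port):
```python
# Quick smoke test for text-generation-webui's OpenAI-compatible API.
# Assumes the webui was launched with the --api flag.
import requests

resp = requests.post(
    "http://127.0.0.1:5000/v1/chat/completions",
    json={
        "model": "anything",  # the webui answers with whatever model is loaded
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 64,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```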
Last edited by v1ckxy; 12 Apr, 2024 @ 1:11am