  The most prestigious law school admissions discussion board in the world.

Why is AI so slow / why does it freeze so much?


Date: January 12th, 2026 5:47 PM
Author: .,.,.;;,;.,;:,:,,:,.,:,::,..;.,:,.:;.:.,;.:.,:.::,




(http://www.autoadmit.com/thread.php?thread_id=5821222&forum_id=2#49584226)




Date: January 12th, 2026 5:47 PM
Author: screenman

you're not paying $2k a month for it yet

(http://www.autoadmit.com/thread.php?thread_id=5821222&forum_id=2#49584229)




Date: January 12th, 2026 5:50 PM
Author: fully online and responsive

Each query has to kill an endangered species

(http://www.autoadmit.com/thread.php?thread_id=5821222&forum_id=2#49584234)




Date: January 12th, 2026 5:51 PM
Author: chopped unc

the apis are fine if you use those. if you are using the web app or desktop app there are a lot of things that can make it go slow, and having a ttt computer is one of the main reasons. check your cpu usage while it's streaming; if it spikes to like 100% across a ton of threads, you just need to open a new session. the app keeps the entire conversation history in memory and on screen, and as the session grows the interface has to maintain and re-render every previous message. guarantee you it's all UI/render latency, not model inference latency. for instance claude streams fast as fuck on amazon bedrock, like almost instant.
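if you want to check this yourself, here's a rough sketch (python, boto3) that times time-to-first-chunk streaming straight from bedrock. it assumes you already have AWS credentials configured and bedrock access to a claude model; the model id and region below are just placeholders, swap in whatever your account actually has enabled:

import json
import time

import boto3  # AWS SDK; assumes credentials are already configured

# placeholder model id / region -- use whatever your account has access to
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"
client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [{"role": "user", "content": "explain streaming latency in two sentences"}],
})

start = time.perf_counter()
first_chunk_at = None
chunks = 0

# stream the response and count content deltas as they arrive
response = client.invoke_model_with_response_stream(modelId=MODEL_ID, body=body)
for event in response["body"]:
    payload = json.loads(event["chunk"]["bytes"])
    if payload.get("type") == "content_block_delta":
        if first_chunk_at is None:
            first_chunk_at = time.perf_counter()
        chunks += 1

elapsed = time.perf_counter() - start
if first_chunk_at is not None:
    print(f"time to first chunk: {first_chunk_at - start:.2f}s")
print(f"chunks: {chunks}, total wall time: {elapsed:.2f}s")

if that comes back near-instant while the web app crawls on the same prompt, the lag is in the client rendering, not the model.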

(http://www.autoadmit.com/thread.php?thread_id=5821222&forum_id=2#49584242)




Date: January 12th, 2026 5:54 PM
Author: https://i.imgur.com/ovcBe0z.png (mail@baumeisterlaw.com)


because there's a shortage of GPUs and RAM. Most OpenAI users are only interacting with quantized models because why let proles use the full uncompressed model?

https://www.xoxohth.com/thread.php?thread_id=5820268&forum_id=2
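rough back-of-envelope on why they'd quantize, assuming a generic 70B-parameter model (the size is just an example, not any specific OpenAI model). weight memory is parameter count times bytes per weight, so dropping from fp16 to int8 or int4 cuts GPU memory in half or to a quarter:

# illustrative parameter count, not a specific model
params = 70e9

for name, bytes_per_weight in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    gb = params * bytes_per_weight / 1e9
    print(f"{name}: ~{gb:.0f} GB of GPU memory just for the weights")

that prints roughly 140 / 70 / 35 GB, which is the difference between needing a rack of GPUs and fitting on a couple of them.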

(http://www.autoadmit.com/thread.php?thread_id=5821222&forum_id=2#49584250)




Date: January 12th, 2026 5:58 PM
Author: chopped unc

no. all of the "slow ai" complaints come down to slow streaming, which is a ui rendering problem caused by an underpowered local cpu and poorly optimized client code
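easy enough to verify locally: sample total cpu while a long response streams in the app. a minimal sketch using psutil (third-party, pip install psutil; the 30-second window is arbitrary). if it sits near 100% for the whole stream, the bottleneck is client-side rendering:

import psutil  # third-party: pip install psutil

# sample system-wide cpu usage once a second for ~30 seconds
# run this while a long response is streaming in the web/desktop app
for _ in range(30):
    print(f"cpu: {psutil.cpu_percent(interval=1.0):5.1f}%")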

(http://www.autoadmit.com/thread.php?thread_id=5821222&forum_id=2#49584255)




Date: January 12th, 2026 6:06 PM
Author: https://i.imgur.com/ovcBe0z.png (mail@baumeisterlaw.com)


makes sense

(http://www.autoadmit.com/thread.php?thread_id=5821222&forum_id=2#49584270)