Dude, it's a very small dataset focused on ONE set of facts, designed for specific queries.
If the commie Chinese taught us anything with DeepSeek, it's this:
LLMs aren't the way. Smaller, well-trained, distributed, FOCUSED AI engines are far faster, will scale out better, be more efficient, and deliver better, more consistent results.
The above statement comes with a large number of assumptions, the biggest being this: the smaller and more focused an AI becomes, the more dependent it is on UNBIASED information to be properly trained and provide reliable results.
This is where DeepSeek FAILED: heavily biased algorithms trained on government-approved data. (I knew THIS point specifically as soon as I started reading up on DeepSeek and how the Chinese did it.)
BTW, I've told you this before: it's relatively easy to run a small AI engine at home with a proper graphics card (I believe I recommended one to you when you were building your PC). The instructions for running a small, Docker-packaged AI engine are out there. I do it on my Ubuntu Linux server @ home.
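For reference, the home setup described above looks roughly like this. A minimal sketch, assuming Docker and the NVIDIA Container Toolkit are already installed and using Ollama as the engine (other runners like llama.cpp exist too); the model name at the end is just an example of a small one:

```shell
# Run the Ollama container with GPU access:
#   --gpus=all                hand the NVIDIA GPU(s) over to the container
#   -v ollama:/root/.ollama   persist downloaded models in a named volume
#   -p 11434:11434            expose Ollama's local HTTP API on the host
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama

# Pull a small model and chat with it from the terminal
docker exec -it ollama ollama run llama3.2
```

Once the container is up, anything on the local network can hit the API on port 11434, which is what makes the "small, distributed, focused" setup practical on consumer hardware.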
What function do graphics cards serve in AI? How do they help the algorithms?
Please excuse my question; I'm dumbfounded and dumbstruck when it comes to this subject matter ….. AI