Now we look at Colorado:
‘Lap of luxury:’ Section 8 covers Colorado rents up to $3,879 a month
https://www.thecentersquare.com/colorado/article_d4f06c07-7306-4ee3-b100-e4064fb2b78b.html
Excerpt:
.....In Colorado, the HCV program covers rents up to $3,879 per month for four-bedroom homes in the Colorado Springs ZIP codes of 80118, 80914, 80924, and 80927.
Of the 43 available four or more bedroom homes listed for rent in these ZIP codes, all but three were below the $3,879 limit.
In 80924, which includes Wolf Ranch, there are 28 homes with four or more bedrooms for rent, ranging from $2,099 per month to $4,250 per month, all but three of which are below the $3,879 per month limit. The median rent is $3,250 per month. One $3,250 example is a five bedroom, four bathroom, 3,790 square foot home including a home theater, bar, a large fenced-in yard, and three-car garage.
If a family with the average HCV household income — estimated by HUD to be $18,558 per year, or $1,546.50 per month, including other welfare payments — were to rent this home, the household’s out-of-pocket cost would be $463.95 per month. That would leave taxpayers on the hook for the other $2,786.05 per month in perpetuity, or until the admitted individual exits or is removed from the program.
According to Sepp, keeping out-of-pocket costs fixed while allowing for portability encourages households to seek out the most expensive home they can secure, instead of saving taxpayers money by choosing a home they could one day more easily afford on their own.
“By fixing the out of pocket exposure, the program is defeating one of its own purposes of encouraging responsibility in housing — if you’re going to pay the same amount of money, why bother with getting somewhere that costs less?” continued Sepp.
Should a household start to make more money than the area’s maximum Section 8 income limit — which for a five-member household in Colorado Springs is $60,750 per year — the family would be forced off the program. At $60,750 per year, a household that does not want to be rent-burdened — and thus spends no more than 30% of its income on rent — could only afford rent of $1,518.75 per month. That is significantly less than the up to $3,879 of taxpayer-funded value provided by Section 8.
As a result, earning more money could cost Section 8 recipients their housing. To not be rent-burdened while paying $3,250 per month on rent, a household would need to make $130,000 per year, or more than double the income threshold at which a family would be removed from Section 8.
“It makes no sense,” continued Sepp. “There has to be a comprehensive, data-driven adjustment to all of these benefits.”
HUD did not respond to requests for comment.
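The excerpt’s figures can be checked with quick arithmetic. A minimal sketch, assuming the common simplification that the tenant pays 30% of gross monthly income (actual HUD rules use adjusted income and utility allowances):

```python
# Back-of-envelope check of the article's Section 8 arithmetic.
# Simplifying assumption: tenant share = 30% of gross monthly income
# (actual HUD rules use adjusted income and utility allowances).

AFFORDABLE_SHARE = 0.30

# Tenant share and taxpayer subsidy on the $3,250 Wolf Ranch example
annual_income = 18_558                  # average HCV household income per HUD
monthly_income = annual_income / 12     # $1,546.50
tenant_share = AFFORDABLE_SHARE * monthly_income
subsidy = 3_250 - tenant_share
print(f"tenant share ${tenant_share:,.2f}, subsidy ${subsidy:,.2f}")
# tenant share $463.95, subsidy $2,786.05

# The benefits cliff: rent affordable right at the income limit,
# versus the income needed to afford the $3,250 median rent unassisted
income_limit = 60_750                   # 5-person household, Colorado Springs
max_rent_at_limit = income_limit * AFFORDABLE_SHARE / 12
income_needed = 3_250 * 12 / AFFORDABLE_SHARE
print(f"${max_rent_at_limit:,.2f} affordable vs ${income_needed:,.0f} needed")
# $1,518.75 affordable vs $130,000 needed
```

The numbers tie out exactly with the article’s: the subsidy gap and the cliff between $60,750 and $130,000 are just the 30% rule applied in both directions.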
************************
Need to either eliminate HUD or totally revamp the financial assistance HUD gives. I’m for eliminating the agency. HUD is another agency created when Johnson signed it into law on September 9, 1965.
I should move to Colorado and become a ski bum. I just won’t mention how I exercise my Second Amendment rights.
NVIDIA Goes All Adam Smith - Small Language Models Are Best
Cognitive first principles do not change - focus on a problem to solve it
https://fractalcomputing.substack.com/p/nvidia-goes-all-adam-smith-small
***********************************
NVIDIA Goes All Adam Smith - Small Language Models Are Best
This week, the ongoing battle between the massive centralized data center team and the distributed, compute-where-the-data-is team took an historic turn.
NVIDIA stepped up - going all in on classical economist Adam Smith - specialization is better than dreaming - get to work building Small Language Models.
Most A.I. agents do not need Large Language Models - says NVIDIA.
A.I. agent problems are specific - not general, and specific problems have very constrained domains.
That is why they are called agents - not philosophers.
Of course, agents are specific - because nobody on the planet needs an energy consuming data center - ripping up Virginia farmland, to write haikus or decide the purpose of mankind.
When your governor told you the future of A.I. is large data centers, he did not tell you that big data centers are needed for Large Language Models - which are needed for UNCONSTRAINED analysis - when every critical A.I. agent problem is CONSTRAINED.
.....In 1776, Adam Smith published The Wealth of Nations.
Smith made simple, yet stunning - proven - hypotheses:
Dividing labor into simple, repeatable tasks is a “division of labor,” and it is the foundational principle of classical economics.
We are learning it may also be a first principle in human and machine decisioning.
NVIDIA observed that A.I. agents are specific and employ constrained models addressing identified problems.
A.I. agents are a better fit for small language models - which are kind of like dividing up the A.I. labor.
Division of labor demands applying constraints to a task.
Adam Smith used the example of a pin manufacturer.
If one guy makes the entire pin (today’s LLM), he can make only a fraction of the pins made by a team where everyone performs a SPECIFIC TASK.
Divide the problem into its individual components - division of labor - everything changes.
· Workers become more skilled at each task.
· Workers work faster, producing more pins, because they do not waste time moving around among tasks.
· Specialization creates innovation - innovation creates new tools and methods - and more pins are manufactured.
NVIDIA made almost the identical observations with Small Language Models (SLM) versus Large Language Models.
Since the Fractal team is steeped in economics as well as computer science, we relished the article. HERE
(https://medium.com/data-science-in-your-pocket/nvidia-small-llms-are-the-future-a01a7f602b48)
.....The fundamental difference between LLMs and SLMs is that the SLM is task-oriented - the LLM is a model of just about everything.
When one looks at the A.I. problems companies and the government deal with today, few, if any, need an LLM.
.....Let’s take an example - one everyone can understand.
Your spouse wants BRAVO on the streaming channel.
BRAVO is available via HULU - and one or two less well-known streamers - and HULU is $104 a month.
Most of the HULU stuff is Disney and nobody in your house watches cartoons, so why are you paying $104 for HULU, when you only want BRAVO?
That is a constrained A.I. problem.
That is a very tangible problem for a frustrated streaming watcher, and they want an answer.
Someone, probably a HULU competitor, builds an SLM to deal with this modern-day irritation, and it gives the answer one could not find after spending hours on the websites of five streaming services.
Over time, more people ask this question.
Maybe one guy wants BRAVO and ESPN but not the History Channel.
Here we are dealing with the constrained problems of life - don’t deny it, this is what each of us cares about.
When you go to the A.I. system and it is an LLM, and it tells you it has no idea how you can get BRAVO cheaper - but that you shouldn’t be watching BRAVO anyway, you should be learning a new language - you will scream into the phone.
As the NVIDIA piece notes “…..you do not need open-domain brilliance, you need an answer that matters.”
A.I. agents provide answers that matter - now.
A.I. agent models are there to solve very specific problems or questions.
When the system can identify from a caller’s voice their native language - which is not English - and route them instantly to a native speaker, that solves a problem, and ONLY that one problem.
NVIDIA and Adam Smith agree on a whole lot more.
.....When you are solving a small problem, like figuring out how much you need to cough up to the Federal government for that 401(k) after age 65, Schwab or Fidelity can give you a quick A.I. agent answer.
Those rules change constantly - and using small language models, which can be aggregated, the domain experts can focus only on the model for the 401(k) and quickly adapt its rules, which then tie into the other small models.
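One way to picture this division of A.I. labor is a simple router that dispatches each question to a narrow, independently updatable agent. A hypothetical sketch - not NVIDIA’s or Schwab’s actual architecture, and the agent functions here are stand-ins for real small models:

```python
# Hypothetical sketch of SLM-style division of labor: each agent owns
# one constrained domain and can be updated on its own (e.g., when
# 401(k) rules change) without touching any of the other agents.

def agent_401k(query: str) -> str:
    # Stand-in for a small model tuned only on retirement-account rules.
    return "401(k) answer under current-year withdrawal rules"

def agent_streaming(query: str) -> str:
    # Stand-in for a small model tuned only on channel/streamer bundles.
    return "cheapest way to get the channel you asked about"

# The router is the only shared piece; the agents evolve independently.
AGENTS = {
    "401k": agent_401k,
    "streaming": agent_streaming,
}

def route(query: str) -> str:
    normalized = query.lower().replace("(", "").replace(")", "")
    for keyword, agent in AGENTS.items():
        if keyword in normalized:
            return agent(query)
    return "no specialized agent for this question"

print(route("How much tax on my 401(k) withdrawal?"))
```

When the 401(k) rules change, only `agent_401k` gets retuned - the streaming agent, and the router, are untouched. That is the pin factory, in code.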
Adam Smith called it dexterity - the A.I. guys from NVIDIA point out the “unused parameters” in a large model, which Schwab pays for yet which are of zero value to the customer’s problem.
.....LLMs are energy black holes - causing the madness of big data centers each of which consumes half a million gallons of water a day. If LLMs are not needed for agentic A.I. why the data center madness?
.....One of the benefits of the SLM approach for agentic A.I., according to the NVIDIA team - and we agree - is the small language model can be fine-tuned from the data.
NVIDIA explained it as small models, working together - generating training data making the system tighter, cheaper and using less energy.
Adam Smith called this innovation - but specialization pretty much always yields this benefit. Let’s look.
A small language model operates against a domain - a data model of all the stuff relevant to its world. Each operation yields answers - and those answers are unlikely to be perfect Day 1.
So the application builders “tune” the small language model - to make it reactive to its own data.
If an agent is providing an instruction that is inaccurate, it is possible to find out why, make the change, and use the model’s predictions, measured against results, to modify the model.
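That tuning loop - measure the agent’s answers against observed outcomes, retune when accuracy slips - can be sketched in a few lines. A toy illustration under assumed names, not any specific product’s pipeline:

```python
# Toy version of the feedback loop described above: compare an agent's
# predictions with measured results, and flag the small model for
# re-tuning when its accuracy falls below a chosen threshold.

RETUNE_THRESHOLD = 0.90  # assumed cutoff, purely illustrative

def needs_retuning(predictions, actuals, threshold=RETUNE_THRESHOLD):
    correct = sum(p == a for p, a in zip(predictions, actuals))
    accuracy = correct / len(predictions)
    return accuracy < threshold, accuracy

# One agent answer out of four disagreed with the measured result.
flag, acc = needs_retuning(["yes", "no", "yes", "no"],
                           ["yes", "no", "no", "no"])
print(f"accuracy={acc:.2f}, retune={flag}")  # accuracy=0.75, retune=True
```

With a constrained domain, the "actuals" are cheap to collect and the retrain is small - exactly the feedback a model of everything cannot get.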
It is improbable to impossible to do this with a large language model which is, by definition, all things to all people.
.....NVIDIA makes another critical observation - one easily overlooked.
Small language models make computing where the data resides more feasible. A short sentence opening an entire world of technological disruption.
Computing where the data resides - like in a backpack on a battlefield - avoids sending data to Amazon and waiting for a customer service rep to respond. It is becoming a thing.
While edge computing never took off - because edge companies think the edge is a data center on every corner - drone warfare is making the military types understand fast that the days of centralized compute are over.
NVIDIA makes the compelling case SLMs can bring agentic A.I. to where the problem may be, and there is unlikely to be a data center on the corner.
So why are so many people all crazy fired up on LLMs?
.....The tech market is “addicted to centralized” compute, with all data in one great big data center, with APIs among the components. That’s in the article, not from Fractal, but we have been saying it for a long time.
How did the tech market get here?
A.I. showed up in the last 24 months - to the masses - and immediately became the must have for everyone.
Almost-half-a-century-old software companies like Oracle, with old, I/O wait state intensive technology, and others like Palantir - had to go A.I. or go home.
When you carry a multi-billion-dollar market cap, nobody’s going home, so obsolete technology companies went all in on A.I. - via marketing, not with actual new, nimble A.I. technology.
They told the world it needed massive data centers to run large language models. Of course they did, Oracle and Palantir need a data center, A.I. does not.
The LLM vs. SLM battle started with obsolete software companies - saddled with high-latency, I/O wait state intensive technology - struggling to stay relevant, claiming A.I. needed data centers the size of Manhattan.
Like all technology cycles, over time, as more problems are addressed by a new tech like A.I., more insightful heads bring clarity.
NVIDIA this week opened this door a little wider: agentic A.I. - which is most of A.I. - does not need large language models.
Fractal proves every day any existing data center can be more productive by a factor of 1,000 without the commensurate energy usage - using I/O wait state-reducing technology.