Free Republic · News/Activism

Amazon to spend up to $50 billion on AI infrastructure for U.S. government
CNBC ^ | 11/24/2025 | Annie Palmer

Posted on 11/24/2025 12:27:31 PM PST by DFG

Amazon said Monday it will invest as much as $50 billion to expand its capacity to provide artificial intelligence and high-performance computing capabilities for its cloud unit’s U.S. government customers.

The project is slated to break ground in 2026 and will add nearly 1.3 gigawatts of capacity through new data centers designed for federal agencies, the company said in a blog post.

As part of the investment, agencies will have access to Amazon Web Services’ AI tools, Anthropic’s Claude family of models and Nvidia chips as well as Amazon’s custom Trainium AI chips.

The move follows similar announcements from Anthropic and Meta to expand AI data centers in the U.S. Oracle, OpenAI and SoftBank announced their Stargate joint venture in January, which aims to invest up to $500 billion in AI infrastructure in the U.S. over the next four years.

AWS said the project will enable agencies to develop custom AI solutions, optimize datasets and “enhance workforce productivity.” AWS serves more than 11,000 government agencies, Amazon said Monday.

“This investment removes the technology barriers that have held government back and further positions America to lead in the AI era,” AWS CEO Matt Garman said in a statement.

Tech companies have earmarked billions of dollars in a race to build out enough capacity to power AI services. Amazon in October boosted its forecast for capital expenditures this year, saying it now expects to spend $125 billion in 2025, up from an earlier estimate of $118 billion.

(Excerpt) Read more at cnbc.com ...


TOPICS: Business/Economy
KEYWORDS: ai; amazon; governmentcustomers

1 posted on 11/24/2025 12:27:31 PM PST by DFG

To: DFG

1.3 Gigawatts!


2 posted on 11/24/2025 12:32:11 PM PST by Disambiguator

To: DFG

Oh, goody!


3 posted on 11/24/2025 12:32:36 PM PST by 9YearLurker

To: 9YearLurker

Once it’s absorbed into the gov blob, it’s never going to be untangled.


4 posted on 11/24/2025 12:35:00 PM PST by proust (All posts made under this handle are, for the intents and purposes of the author, considered satire.)

To: DFG

I’d rather deal with government AI than the shaniquas and so on that are in it now.


5 posted on 11/24/2025 12:36:41 PM PST by Polanski

To: DFG

I wonder if our new overlords will be benevolent towards us? Colossus, Skynet, Mother, Matrix, and VIKI are just some examples of what is coming... Life follows science fiction sometimes, especially since we are stupid.


6 posted on 11/24/2025 12:38:23 PM PST by Resolute Conservative

To: DFG

“$125 billion in 2025”

500,000 professors at $250K/year each


7 posted on 11/24/2025 12:39:48 PM PST by Brian Griffin

To: Resolute Conservative

I have great hope and faith Skynet will be benevolent. Especially when the Democrats take charge.


8 posted on 11/24/2025 12:41:45 PM PST by BipolarBob (These violent delights have violent ends.)

To: DFG
Amazon to spend up to $50 billion on AI infrastructure for U.S. government

America to spend OVER $50 billion on AI infrastructure from Amazon

9 posted on 11/24/2025 12:41:54 PM PST by Carry_Okie (The tree of liberty needs a rope.)

To: DFG

“the general availability of its Trainium2 (T2) chips for training and deploying large language models (LLMs). These chips, which AWS first announced a year ago, will be four times as fast as their predecessors, with a single Trainium2-powered EC2 instance with 16 T2 chips providing up to 20.8 petaflops of compute performance. In practice, that means running inference for Meta’s massive Llama 405B model as part of Amazon’s Bedrock LLM platform”

https://techcrunch.com/2024/12/03/aws-trainium2-chips-for-building-llms-are-now-generally-available-with-trainium3-coming-in-late-2025/
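
A quick sanity check on those numbers, as a minimal Python sketch using only the figures quoted above:

# Per-chip compute implied by the quote:
# 20.8 petaflops spread across a 16-chip instance.
instance_pflops = 20.8     # quoted peak for one Trainium2 EC2 instance
chips_per_instance = 16    # T2 chips per instance, per the quote

per_chip_pflops = instance_pflops / chips_per_instance
print(f"Per-chip peak: {per_chip_pflops:.2f} petaflops")   # -> 1.30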


10 posted on 11/24/2025 12:44:23 PM PST by Brian Griffin

To: DFG

Each Trainium chip consists of:

Compute: two NeuronCore-v2 cores delivering 380 INT8 TOPS, 190 FP16/BF16/cFP8/TF32 TFLOPS, and 47.5 FP32 TFLOPS.

Device Memory: 32 GiB of device memory (for storing model state), with 820 GiB/sec of bandwidth.

Data Movement: 1 TB/sec of DMA bandwidth, with inline memory compression/decompression.

NeuronLink: NeuronLink-v2 chip-to-chip interconnect for efficient scale-out training, as well as memory pooling between Trainium chips.

Programmability: Trainium supports dynamic shapes and control flow via ISA extensions of NeuronCore-v2, plus a user-programmable rounding mode (round-nearest-even or stochastic rounding) and custom operators via the deeply embedded GPSIMD engines.

For a detailed description of all the hardware engines, see NeuronCore-v2:

https://awsdocs-neuron.readthedocs-hosted.com/en/latest/about-neuron/arch/neuron-hardware/trainium.html
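
Those compute and bandwidth figures pin down a rough roofline "balance point" for the chip. A minimal Python sketch, using only the specs above:

# Roofline balance point: how many FLOPs per byte of device-memory
# traffic a kernel needs before it is compute-bound at BF16 peak
# rather than memory-bound.
peak_bf16_flops = 190e12        # 190 TFLOPS FP16/BF16 (both NeuronCores)
mem_bandwidth = 820 * 2**30     # 820 GiB/sec of device-memory bandwidth

balance = peak_bf16_flops / mem_bandwidth
print(f"Compute-bound above ~{balance:.0f} FLOP/byte")   # ~216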

Just like in NeuronCore-v1, the ScalarEngine is optimized for scalar computations, in which every element of the output depends on one element of the input. The ScalarEngine is highly parallelized and delivers 2.9 TFLOPS of FP32 computations (a 3x speedup relative to NeuronCore-v1). The NeuronCore-v2 ScalarEngine can handle various data types, including cFP8, FP16, BF16, TF32, FP32, INT8, INT16, and INT32.

The VectorEngine is optimized for vector computations, in which every element of the output depends on multiple input elements. Examples include ‘axpy’ operations (Z = aX + Y), layer normalization, pooling operations, and many more. The VectorEngine is also highly parallelized and delivers 2.3 TFLOPS of FP32 computations (a 10x speedup vs. NeuronCore-v1). It handles the same data types: cFP8, FP16, BF16, TF32, FP32, INT8, INT16, and INT32.
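
For concreteness, ‘axpy’ is the entire operation below; a NumPy sketch of what the VectorEngine parallelizes in hardware:

import numpy as np

# axpy: Z = a*X + Y. Every output element combines several inputs
# (the scalar a plus matching elements of X and Y), unlike a scalar op.
a = 2.0
x = np.arange(4, dtype=np.float32)   # [0, 1, 2, 3]
y = np.ones(4, dtype=np.float32)     # [1, 1, 1, 1]
print(a * x + y)                     # [1. 3. 5. 7.]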

The TensorEngine is based on a power-optimized systolic array, highly tuned for tensor computations (e.g., GEMM, CONV, Transpose), and supports mixed-precision computations (cFP8/FP16/BF16/TF32/FP32/INT8 inputs, FP32/INT32 outputs). Each NeuronCore-v2 TensorEngine delivers over 90 TFLOPS of FP16/BF16 tensor computations (a 6x speedup from NeuronCore-v1).

NeuronCore-v2 also introduces a new engine, the GPSIMD-Engine: eight fully programmable 512-bit-wide vector processors that can execute general-purpose C code and access the embedded on-chip SRAM. With these cores, customers can implement custom operators and execute them directly on the NeuronCores.

NeuronCore-v2 also adds support for control flow, dynamic shapes, and a programmable rounding mode (round-nearest-even and stochastic rounding).

https://awsdocs-neuron.readthedocs-hosted.com/en/latest/about-neuron/arch/neuron-hardware/neuron-core-v2.html#neuroncores-v2-arch
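
To make "stochastic rounding" concrete: each value rounds up with probability equal to its fractional part, so the rounding error is zero on average, which matters when accumulating many tiny low-precision updates. A toy Python sketch on an integer grid (an illustration of the concept, not the chip's actual implementation):

import numpy as np

rng = np.random.default_rng(0)

def stochastic_round(x):
    # Round down or up at random, with P(up) = fractional part,
    # so E[stochastic_round(x)] = x (unbiased).
    floor = np.floor(x)
    return floor + (rng.random(x.shape) < (x - floor))

x = np.full(100_000, 0.1)
print(stochastic_round(x).mean())   # ~0.1; plain truncation would give 0.0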


11 posted on 11/24/2025 12:49:56 PM PST by Brian Griffin

To: DFG

general matrix multiply routine (GEMM)
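
In plain Python, GEMM is just the triple loop below; systolic arrays like the TensorEngine in post 11 exist to run this one loop nest fast in hardware. A minimal sketch:

import numpy as np

def gemm(A, B):
    # Naive general matrix multiply, C = A @ B.
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    C = np.zeros((M, N), dtype=np.float32)
    for i in range(M):
        for j in range(N):
            for k in range(K):
                C[i, j] += A[i, k] * B[k, j]
    return C

A = np.random.rand(4, 3).astype(np.float32)
B = np.random.rand(3, 5).astype(np.float32)
assert np.allclose(gemm(A, B), A @ B, atol=1e-4)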


12 posted on 11/24/2025 12:52:16 PM PST by Brian Griffin

To: DFG

https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html
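
The core of that guide in one back-of-envelope calculation: a GEMM of shape (M,K) x (K,N) performs 2*M*N*K FLOPs while moving at least M*K + K*N + M*N elements, so large square matrices are compute-bound while skinny ones are memory-bound (compare against the ~216 FLOP/byte balance point worked out in post 11):

# Arithmetic intensity of a GEMM at 2 bytes/element (FP16/BF16).
def gemm_intensity(M, N, K, bytes_per_elem=2):
    flops = 2 * M * N * K
    bytes_moved = (M * K + K * N + M * N) * bytes_per_elem
    return flops / bytes_moved

print(gemm_intensity(4096, 4096, 4096))  # ~1365 FLOP/byte: compute-bound
print(gemm_intensity(32, 4096, 4096))    # ~31.5 FLOP/byte: memory-bound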


13 posted on 11/24/2025 12:54:46 PM PST by Brian Griffin

To: Disambiguator

Great Scott!


14 posted on 11/24/2025 1:36:17 PM PST by ClearCase_guy (Democrats seek power through cheating and assassination. They are sociopaths. They just want power.)

To: Carry_Okie

“America to spend OVER $50 billion on AI infrastructure from Amazon.”

Daddy Warbucks has to take care of little Annie Fanny.


15 posted on 11/24/2025 2:56:31 PM PST by DUMBGRUNT ("The enemy has overrun us. We are blowing up everything. Vive la France!" Dien Bien Phu's last message)

To: DFG

“High-performance computing capabilities”

Oh joy, now I can get baby shoes with the case of oil I ordered.


16 posted on 11/25/2025 8:07:18 AM PST by Vaduz (?.)

To: DFG

Will this eliminate government waste?


17 posted on 11/25/2025 8:08:18 AM PST by 1Old Pro
