Trump’s AI Blueprint: Energy‑Intensive Centers, Anti‑Woke Tech, and Dominating the AI Frontier

Understanding the US AI Action Plan
Key Objectives
- Advancing Responsible AI Research – Allocating resources to develop robust and safe AI systems.
- Fortifying Cybersecurity – Implementing safeguards against AI‑driven attacks on critical infrastructure.
- Enhancing Workforce Readiness – Launching training programs to equip the workforce with AI literacy.
- Fostering Global Cooperation – Engaging internationally to shape consistent ethical AI standards.
Implementation Roadmap
- Establish a dedicated AI oversight agency to steer national policy.
- Define comprehensive ethical guidelines for AI deployment across sectors.
- Channel funding into AI safety research laboratories.
- Build public‑private collaborations that drive job creation in AI fields.
Why It Matters
The Action Plan is designed to keep the United States at the forefront of AI innovation while ensuring that technological progress is aligned with societal values and security needs.
Trump Outlines Major AI Policy Moves
Key Points from the President’s Announcement
- Exclusion of “woke AI” models from federal use.
- Transformation of the United States into a leading exporter of AI technology.
- Relaxation of environmental safeguards for AI development.
Context: Executive Orders and the AI Action Plan
On Wednesday, the President signed three executive orders focused on artificial intelligence, marking a significant step in the administration’s AI action plan.
These orders signal a strategic shift in the nation’s approach to AI technologies.
1. No Woke AI
U.S. Government Order Targets “Woke AI” in Federal Contracts
The recently issued directive, titled “Preventing Woke AI in the Federal Government,” seeks to exclude models deemed politically charged from federal procurement. It expressly bans any artificial‑intelligence system that is not considered “ideologically neutral.” The order argues that diversity, equity, and inclusion (DEI) initiatives represent a “pervasive and destructive” ideology capable of distorting both quality and accuracy in outputs.
Key Points of the Order
- Excludes AI systems that incorporate content related to race, gender, transgender identity, unconscious bias, intersectionality, or systemic racism.
- Claims that such topics hamper objective decision‑making in AI.
- Asserts that protecting free speech and “American values” requires removing DEI, climate data and misinformation from AI tools.
However, by eliminating these topics, the policy may unintentionally push AI toward greater bias, making true objectivity harder to achieve.
Backlash from AI Critics
- David Sacks, former PayPal chief operating officer and now Trump’s chief AI advisor, has long opposed “woke AI.” His criticism gained prominence after Google’s February 2024 launch of an AI image generator that depicted George Washington as Black, Asian, and Native American.
- Although quickly corrected, the “Black George Washington” incident became a case study for concerns over AI’s political leanings.
- The episode was amplified by prominent figures such as Elon Musk, Marc Andreessen, Vice President JD Vance, and a coalition of Republican lawmakers.
Political Implications
With the new order, the federal government aims to safeguard what it views as American values. Yet skeptics warn that the exclusion of DEI, climate, and misinformation could paradoxically deepen biases, limiting the effectiveness of AI in public policy.
2. Global dominance, cutting regulations
AI Strategy: Pushing Innovation While Maintaining Ideological Balance
Key Objectives
- Advance AI research and deployment across private sectors and public institutions.
- Eliminate obstacles that hinder rapid integration of artificial intelligence technology.
- Commit to “whatever it takes” to position the nation as a global AI leader.
In addition to accelerating progress, the strategy aims to shape the industry’s direction by addressing a persistent concern among its most fervent supporters.
Addressing Perceived Bias in AI Systems
Many tech advocates argue that widely used chatbots—such as OpenAI’s ChatGPT and Google’s Gemini—exhibit “liberal” leanings. The plan seeks to:
- Encourage the creation of AI models that reflect a broader spectrum of viewpoints.
- Promote transparency in the training data and decision-making processes of these systems.
- Foster competitive alternatives that offer balanced discourse and diverse perspectives.
By tackling both innovation and ideological concerns, the policy underscores the administration’s dual focus on technological leadership and the perceived values embedded within AI.
3. Streamlining AI data centre permits and reducing environmental regulation
Trump’s AI‑Centred Expansion Plan: Fast‑Tracking Construction, Relaxing Restrictions
At a recent event, President Donald Trump outlined a bold strategy to accelerate the development of new data centres and manufacturing plants. Key to the proposal is a push to streamline permitting and reduce environmental controls, aiming to speed up construction and meet the growing energy demands of artificial intelligence systems.
Major Points of the Initiative
- Eliminate “radical climate dogma” and roll back restrictions imposed under clean‑air and clean‑water legislation.
- Expand power capacity to match China’s levels, with incentives for companies to build their own power plants.
- Encourage a single federal standard that overrides state regulations, reducing the risk of multi‑state legal battles.
Current Momentum in the Tech Sector
Tech giants such as OpenAI, Amazon, Microsoft, Meta, and xAI are actively pursuing new facilities across the United States and worldwide. OpenAI recently activated the first phase of a large data centre complex in Abilene, Texas, part of the Oracle‑backed Stargate project championed by Trump earlier this year.
Environmental Concerns and Regulatory Pushback
While the industry seeks less restrictive permitting to connect to the power grid, the rise in AI operations has increased fossil‑fuel usage, contributing to global warming. In response, UN Secretary‑General António Guterres has urged global tech firms to power all data centres with renewable energy by 2030.
Federal Funding and State Regulation
Trump’s plan also seeks to discourage states from imposing stringent AI‑related regulations, advocating that federal agencies withhold funds from states that adopt overly burdensome rules. “We need one common-sense federal standard that supersedes all states, supersedes everybody,” Trump emphasized, noting that it would spare companies from litigating with 43 states simultaneously.
What Could Be Expected
- Rapid rollout of new AI infrastructure in the U.S.
- Potential increase in domestic energy production and higher consumption of fossil fuels.
- Calls for a unified national approach to AI governance and environmental protection.
Call for a People’s AI Action Plan
Leading VC Thinkers Clash Over AI Governance
The All-In podcast, one of the tech industry’s most popular shows, has become a battleground where high‑profile venture capitalists lay out competing strategies for regulating artificial intelligence.
Accelerationist vs. Techno‑Realist Views
- Marc Andreessen and his allies espouse an “accelerationist” stance, pushing for rapid AI development with minimal oversight. They believe the tech ecosystem can hand off governance to market forces and self‑regulation.
- In contrast, David Sacks champions a techno‑realist approach, advocating balanced policies that accept inevitable progress while applying pragmatic safeguards. He cautioned that trying to halt AI development would be futile: “If we don’t intervene, someone else will,” he said on the podcast.
Collective Opposition to Trump‑Led AI Initiative
On Tuesday, more than one hundred organisations, including labour unions, parent advocacy groups, environmental justice coalitions, and privacy defenders, united behind a resolution. The declaration rejects the administration’s push for industry‑driven AI policy and demands a “People’s AI Action Plan” focused primarily on American citizens.
Key Themes in the Resolution
- Inclusive policymaking that engages a broad spectrum of stakeholders.
- Clear safeguards against potential hazards such as bioweapons, cyber‑terrorism, and erratic algorithmic behaviour.
- Commitments to workforce protection and prevention of monopolistic power consolidation.
Perspective from the Future of Life Institute
Anthony Aguirre, executive director of the non‑profit Future of Life Institute, spoke with Euronews Next about the administration’s proposals. He acknowledged the plan’s recognition of “critical risks” but stressed the urgent need for stronger protections.
“Relying merely on voluntary safety pledges from leading AI companies leaves the country vulnerable to serious mishaps, job displacement, and a concentration of power that erodes human control,” Aguirre noted.
“Experience demonstrates that corporate promises alone are insufficient.”
Call to Action
- Propose comprehensive regulatory frameworks rather than loose industry agreements.
- Ensure transparent monitoring and accountability across AI development pipelines.
- Prioritize human welfare and national security in all AI‑related policies.