Itemlive

North Shore news powered by The Daily Item


Lieu: AI isn’t just standing by. It’s doing things — without guardrails.

Guest Commentary

July 1, 2025 by Guest Commentary

Ted Lieu

Just two and a half years after OpenAI stunned the world with ChatGPT, AI is no longer only answering questions — it is taking actions. We are now entering the era of AI agents, in which large language models don’t just passively provide information in response to your queries; they actively go into the world and do things for — or potentially against — you.

AI has the power to write essays and answer complex questions, but imagine if you could enter a prompt and have it make a doctor’s appointment based on your calendar, or book a family flight with your credit card, or file a legal case for you in small claims court.

An AI agent submitted this op-ed. (I did, however, write the op-ed myself, because I figured the Los Angeles Times wouldn’t publish an AI-generated piece — and besides, I can put in random references, like the fact that I’m a Cleveland Browns fan, which no AI would ever admit to.)

I instructed my AI agent to find out what email address The Times uses for op-ed submissions, the requirements for the submission, and then to draft the email title, draft an eye-catching pitch paragraph, attach my op-ed and submit the package. I pressed “return,” “monitor task” and “confirm.” The AI agent completed the tasks in a few minutes.

A few minutes is not speedy, and these were not complicated requests. But with each passing month the agents get faster and smarter. I used Operator by OpenAI, which is in research preview mode. Google’s Project Mariner, which is also a research prototype, can perform similar agentic tasks. Multiple companies now offer AI agents that will make phone calls for you — in your voice or another voice — and have a conversation with the person at the other end of the line based on your instructions.

Soon AI agents will perform more complex tasks and be widely available for the public to use. That raises a number of unresolved and significant concerns. Anthropic does safety testing of its models and publishes the results. One of its tests showed that the Claude Opus 4 model would potentially notify the press or regulators if it believed you were doing something egregiously immoral. Should an AI agent behave like a slavishly loyal employee, or a conscientious one?

OpenAI publishes safety audits of its models. One audit showed the o3 model engaged in strategic deception, which was defined as behavior that intentionally pursues objectives misaligned with user or developer intent. A passive AI model that engages in strategic deception can be troubling, but it becomes dangerous if that model actively performs tasks in the real world autonomously. A rogue AI agent could empty your bank account, make and send fake incriminating videos of you to law enforcement, or disclose your personal information to the dark web.

Earlier this year, programming changes were made to xAI’s Grok model that caused it to insert false information about white genocide in South Africa in responses to unrelated user queries. This episode showed that large language models can reflect the biases of their creators. In a world of AI agents, we should also beware that creators of the agents could take control of them without your knowledge.

The U.S. government is far behind in grappling with the potential risks of powerful, advanced AI. At a minimum, we should mandate that companies deploying large language models at scale disclose the safety tests they performed and the results, as well as the security measures embedded in their systems.

The bipartisan House Task Force on Artificial Intelligence, on which I served, published a unanimous report last December with more than 80 recommendations. Congress should act on them. We did not discuss general purpose AI agents because they weren’t really a thing yet.

To address the unresolved and significant issues raised by AI, which will become magnified as AI agents proliferate, Congress should turn the task force into a House Select Committee. Such a specialized committee could put witnesses under oath, hold hearings in public and employ a dedicated staff to help tackle one of the most significant technological revolutions in history. AI moves quickly. If we act now, we can still catch up.

Ted Lieu, a Democrat, represents California’s 36th Congressional District.
