What Happened This Week

On Thursday, April 10, Maine’s legislature sent LD 2082 to the governor’s desk. The bill prohibits the clinical use of artificial intelligence in mental health therapy. It still allows AI to handle scheduling, billing, and other administrative tasks, but the moment a chatbot starts acting as a therapist, that’s a violation. The legislature is scheduled to adjourn April 15, so lawmakers pushed this through with urgency.

Missouri is moving in the same direction. HB 2372, an omnibus healthcare bill that includes a therapy chatbot ban, passed the full House on April 2 and now sits with the Senate Committee on Families, Seniors and Health. This one goes further than Maine’s bill: it covers therapy services, psychotherapy services, and mental health diagnosis. A first violation carries a $10,000 penalty, enforced by the state Attorney General.

These two states did not appear out of nowhere. Oregon and Idaho both enacted chatbot-related bills during the first week of April. Tennessee Governor Bill Lee signed an AI healthcare bill around the same time. The Tennessee legislature also passed a separate personhood bill, SB 837, which provides that artificial intelligence cannot be considered a “person,” “life,” or “natural person” under state law. The Senate vote was 26-6; the House vote was 93-2. That kind of bipartisan margin is unusual for anything tech-related.

The Market That Triggered the Crackdown

None of this legislation exists in a vacuum. The mental health chatbot market is projected to hit $2.03 billion by 2033, growing at a 6.2% annual rate, with North America holding 42% of the global share. Companies like Woebot Health, Wysa, and Youper have built real businesses around AI-driven therapy tools. Wysa even received FDA Breakthrough Device Designation, which is normally reserved for medical devices that address unmet needs.
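As a rough sanity check on those numbers (a sketch, not anything from the market report itself; the base year is my assumption), a 6.2% compound annual growth rate ending at $2.03 billion in 2033 implies a market of roughly $1.25 billion today:

```python
# Back-of-the-envelope check on the projection.
# Assumption: the 6.2% CAGR compounds from a 2025 base year to 2033.
projected_2033 = 2.03e9   # projected market size, dollars
cagr = 0.062              # 6.2% annual growth rate
years = 2033 - 2025       # 8 years of compounding (assumed base year)

implied_base = projected_2033 / (1 + cagr) ** years
print(f"Implied 2025 market: ${implied_base / 1e9:.2f}B")  # ~$1.25B
```

In other words, the projection assumes steady growth from a market that already exists at scale, not a speculative future one.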

The problem is scale without oversight. These products started reaching vulnerable people, including people in active mental health crises, while regulatory frameworks stayed silent. Several chatbot products were deployed in clinical or quasi-clinical settings without the licensing and oversight requirements that apply to human therapists. That gap between product availability and regulatory guardrails is exactly what these bills are trying to close.

Why States Are Moving Faster Than Washington

The federal government is simultaneously accelerating AI adoption across agencies and fighting legal battles over where AI authority begins and ends. Meanwhile, states have a much narrower question to answer: should a chatbot be allowed to treat a person experiencing a mental health crisis?

The answer emerging from state legislatures is a clear no. Georgia’s SB 540, a chatbot disclosure and child safety bill, awaits the governor’s signature after the session ended last Monday. Nebraska attached an AI chatbot safety measure, LB 1185, to an agricultural data privacy bill, of all things, and the combined package has been placed on final reading before the legislature adjourns April 17.

Illinois has so many AI bills floating around that the Senate essentially formed a committee just to sort through them all. Alabama’s session wraps April 16, and AI measures are still in play there too.

The Line Between Clinical and Administrative

Here is the part worth paying attention to. Both the Maine and Missouri bills draw the same distinction: AI is fine for administrative healthcare tasks. AI is banned from clinical therapeutic work. This is not a blanket anti-AI stance. It is a specific boundary.

That boundary matters because it sets a template other states can copy. Instead of trying to regulate all AI in healthcare, which would be messy and probably unworkable, legislators are targeting a narrow, emotionally resonant category: a machine talking a suicidal teenager through their feelings. Hard to argue against regulating that.

But the line between “administrative” and “clinical” gets blurry fast. Does triaging a patient’s symptoms count as diagnosis? Does suggesting coping strategies count as therapy? Does a chatbot that asks “how are you feeling today?” cross into clinical territory? These are the questions that will end up in court.
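To see why those questions are hard, here is a minimal sketch of how a product team might try to gate chatbot traffic under bills like Maine’s and Missouri’s. Everything in it is hypothetical, including the keyword lists and function names, and naive keyword matching is precisely the approach the blurry cases defeat:

```python
# Illustrative only: a naive gate separating administrative requests
# (still permitted under the Maine and Missouri bills) from clinical
# ones (banned). The signal lists are hypothetical; real intent
# classification is the hard, contested part.

CLINICAL_SIGNALS = {"diagnose", "therapy", "coping", "suicidal", "feeling"}
ADMIN_SIGNALS = {"appointment", "schedule", "billing", "insurance", "refill"}

def route(message: str) -> str:
    words = set(message.lower().split())
    if words & CLINICAL_SIGNALS:
        # Clinical territory: hand off to a licensed human clinician.
        return "escalate_to_human"
    if words & ADMIN_SIGNALS:
        return "chatbot_ok"  # administrative tasks remain permitted
    # Ambiguous messages are where the legal risk lives.
    return "escalate_to_human"

print(route("Can I schedule an appointment for Tuesday?"))  # chatbot_ok
print(route("How do I stop feeling this way?"))             # escalate_to_human
```

Note that “how are you feeling today?” trips the clinical filter in this sketch, even though a court might read it as small talk. That ambiguity is the litigation risk.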

What Comes Next

By the end of April, five or six states could have AI therapy bans on their books. That creates a patchwork of state regulations that chatbot companies will have to navigate, and it pressures Congress to act on something, anything, related to AI oversight.

The Tennessee personhood bill is the sleeper here. Defining AI as legally distinct from human beings sounds obvious, but getting that definition codified into statute with 93-2 and 26-6 margins gives other states a precedent. Expect to see similar personhood bills spread quickly.

For the companies building these products, the window for unregulated deployment just closed. Woebot, Wysa, and Youper now face a real compliance burden that did not exist six months ago. Companies that built growth plans on direct-to-consumer mental health tools without clinical licensing will need to rethink them. The $2 billion market projection did not account for state-level bans. It should have.
