2026 Is Coming For Big Tech And AI With A Battery Of New Laws

Several state AI and tech rules took effect on Jan. 1, with more coming as the new year progresses.
In this photo illustration, an Artificial Intelligence logo is displayed on a smartphone with a US Flag in the background. (Photo Illustration by Omar Marques/SOPA Images/LightRocket via Getty Images)
Yuvraj Malik·Stocktwits
Published Jan 02, 2026   |   4:09 AM EST
  • A series of new laws targeting AI, data privacy, and harmful content takes effect this year, increasing compliance obligations for tech companies.
  • The rollout may prove difficult, as the federal government views the growing patchwork of state AI rules as a drag on innovation and has moved to rein them in.
  • Some of the rules could face challenges in court.

Tech companies face a crowded regulatory calendar, with a wave of new rules set to take effect in 2026. A mix of state and federal laws – spanning data privacy, AI services, and children’s use of digital platforms – will come into force, tightening oversight and increasing compliance burdens.

The regulatory landscape could be somewhat chaotic, though: Last month, President Donald Trump signed an executive order giving federal agencies a framework to sue states, among other remedies, over state AI laws the federal government determines are onerous and stifle innovation.

Here’s a look at some of the new regulations with significant implications:

AI Safety

In 2026, a wide range of California laws regulating the development, marketing, and use of AI go into effect. While some are set to be implemented in the next few years, a few notable ones took effect on Jan. 1.

Among them are rules requiring major AI companies to publish safety and security details; prohibiting AI developers from asserting, as a legal defense in civil lawsuits, that an AI autonomously caused harm; prohibiting AI systems from misrepresenting themselves as licensed medical professionals; and banning deepfake pornography.

Later in the year, Colorado will implement an AI regulation, directly named in Trump’s criticism of state AI laws, requiring AI companies to disclose information about high-risk systems and put in place measures to prevent algorithmic discrimination.

Data Privacy

Indiana is implementing a law that establishes a framework for users to obtain, correct, and delete the personal information companies hold about them. Similarly, Rhode Island has a new law that requires disclosure of how personal information is collected and sold.

Maine is implementing a law that requires, among other things, companies to disclose subscription terms and to offer a simple cancellation method. A similar regulation by the Federal Trade Commission did not pass an appeals court test last year.

Children’s Safety

Nebraska is implementing a law to limit the excessive use of social media and apps by children. It mandates restrictions on app features such as notifications, in-game purchases, and infinite scrolling for children, and addresses “dark patterns” that keep kids online. A similar law in California was blocked last year.

Along similar lines, Virginia has implemented a rule requiring social media companies to verify users’ ages and limit younger teens to one hour of use per app per day.

Later in the year, a children’s data privacy rule in Arkansas will go into effect, barring online services from collecting unnecessary personal data from minors.

Harmful Content

Oregon is implementing several rules, including provisions to ban AI-generated sexual imagery, to limit data collection for ad targeting of users under 16, and to prohibit software designed to facilitate ticket-scalping bots.

Texas is enacting an AI regulatory framework that prohibits using the technology to incite harm, capture biometric identifiers, or discriminate based on characteristics such as race, gender, or political viewpoint.

In the months ahead, two significant pieces of legislation will take effect: New York’s scaled-back RAISE Act, which mandates safety plans and incident reporting for frontier AI models, and the federal Take It Down Act, which criminalizes AI-generated nonconsensual intimate imagery and sets removal rules for social platforms.

For updates and corrections, email newsroom[at]stocktwits[dot]com.

Read about our editorial guidelines and ethics policy