Most Democrats and Republicans agree that the federal government should better regulate the biggest technology companies, particularly social media platforms. But there is very little consensus on how it should be done.
Should TikTok be banned? Should younger children be kept off social media? Can the government make sure private information is secure? What about brand new artificial intelligence interfaces? Or should users be regulating themselves, leaving the government out of it?
Tech regulation is gathering momentum on Capitol Hill as concerns skyrocket about China’s ownership of TikTok and as parents navigating a post-pandemic mental health crisis have grown increasingly worried about what their children are seeing online.
Lawmakers have introduced a slew of bipartisan bills, boosting hopes of compromise.
But any effort to regulate the mammoth industry faces major obstacles.

Noting that many young people are struggling, President Joe Biden said in his February State of the Union speech that “it’s time” to pass bipartisan legislation to impose stricter limits on the collection of personal data and ban targeted advertising to children.
“We must finally hold social media companies accountable for the experiment they are running on our children for profit,” Biden said.
Tech companies have aggressively fought any federal interference, and they have operated for decades now without strict federal oversight, making any new rules or guidelines that much more complicated.
A look at some of the areas of potential regulation:
Several House and Senate bills would try to make social media, and the internet in general, safer for children who will inevitably be online. Lawmakers cite numerous examples of teenagers who have taken their own lives after cyberbullying or died engaging in dangerous behavior encouraged on social media.
In the Senate, at least two competing bills are focused on children’s online safety. Legislation by Sens. Richard Blumenthal, D-Conn., and Marsha Blackburn, R-Tenn., approved by the Senate Commerce Committee last year would require social media companies to be more transparent about their operations and enable child safety settings by default. Minors would have the option to disable addictive product features and algorithms that push certain content.
The idea, the senators say, is that platforms should be “safe by design.” The legislation, which Blumenthal and Blackburn reintroduced last week, would also obligate social media companies to prevent certain dangers to minors — including promotion of suicide, disordered eating, substance abuse, sexual exploitation and other illegal behaviors.
A second bill introduced last month by four senators — Democratic Sens. Brian Schatz of Hawaii and Chris Murphy of Connecticut and Republican Sens. Tom Cotton of Arkansas and Katie Britt of Alabama — would take a more aggressive approach, prohibiting children under the age of 13 from using social media platforms and requiring parental consent for teenagers. It would also prohibit the companies from recommending content through algorithms for users under the age of 18.
Senate Majority Leader Chuck Schumer, D-N.Y., has not weighed in on specific legislation but told reporters last week, “I believe we need some kind of child protections” on the internet.
Critics of the bills, including some civil rights groups and advocacy groups aligned with tech companies, say the proposals could threaten teens’ online privacy and prevent them from accessing content that could help them, such as resources for those considering suicide or grappling with their sexual and gender identity.
“Lawmakers should focus on educating and empowering families to control their online experience,” said Carl Szabo of NetChoice, a group aligned with Meta, TikTok, Google and Amazon, among other companies.
Biden’s State of the Union remarks appeared to be a nod toward legislation by Sens. Ed Markey, D-Mass., and Bill Cassidy, R-La., that would expand child privacy protections online, prohibiting companies from collecting personal data from younger teenagers and banning targeted advertising to children and teens. The bill, also reintroduced last week, would create a so-called “eraser button” allowing parents and kids to eliminate personal data, when possible.
A broader House effort would attempt to give adults as well as children more control over their data with what lawmakers call a “national privacy standard.” Legislation that passed the House Energy and Commerce Committee with wide bipartisan support last year would try to minimize data collected and make it illegal to target ads to children, preempting state laws that have tried to put privacy restrictions in place. But the bill, which would have also given consumers more rights to file lawsuits over privacy violations, never reached the House floor.
Prospects for the House legislation are unclear now that Republicans have the majority. House Energy and Commerce Chairwoman Cathy McMorris Rodgers, R-Wash., has made the issue a priority, holding several hearings on data privacy. But the committee has not yet moved forward with a new bill.
Lawmakers introduced a raft of bills to either ban TikTok or make it easier to ban it after a combative March House hearing in which lawmakers from both parties grilled TikTok CEO Shou Zi Chew over his company’s ties to China’s communist government, data security and harmful content on the app.
Chew attempted to assure lawmakers that the hugely popular video-sharing app prioritizes user safety and should not be banned due to its Chinese connections. But the testimony gave new momentum to the efforts.
Soon after the hearing, Missouri Sen. Josh Hawley, a Republican, tried to force a Senate vote on legislation that would ban TikTok from operating in the United States.
But he was blocked by a fellow Republican, Kentucky Sen. Rand Paul, who said that a ban would violate the Constitution and anger the millions of voters who use the app.
Another bill sponsored by Republican Sen. Marco Rubio of Florida would, like Hawley’s bill, ban U.S. economic transactions with TikTok, but it would also create a new framework for the executive branch to block any foreign apps deemed hostile. His bill is cosponsored by Reps. Raja Krishnamoorthi, D-Ill., and Mike Gallagher, R-Wis.
There is broad Senate support for bipartisan legislation sponsored by Senate Intelligence Committee Chairman Mark Warner, D-Va., and South Dakota Sen. John Thune, the No. 2 Senate Republican, that does not specifically call out TikTok but would give the Commerce Department power to review and potentially restrict foreign threats to technology platforms.
The White House has signaled it would back that bill, but it is unclear if it will be brought up in the Senate or if it could garner support among House Republicans.
TikTok has launched an extensive lobbying campaign for its survival, including by harnessing influencers and young voters to argue that the app isn’t harmful.
A newer question for Congress is whether lawmakers should move to regulate artificial intelligence as rapidly developing and potentially revolutionary products like AI chatbot ChatGPT begin to enter the marketplace and can in many ways mimic human behavior.
Senate leader Schumer has made the emerging technology a priority, arguing that the United States needs to stay ahead of China and other countries that are eyeing regulations on AI products. He has been working with AI experts and has released a general framework of what regulation could look like, including increased disclosure of the people and data involved in developing the technology and greater transparency about how the chatbots arrive at their responses.
Schumer has said that any eventual regulation should “prevent potentially catastrophic damage to our country while simultaneously making sure the U.S. advances and leads in this transformative technology.”
The White House has been focused on the issue as well, with a recent announcement of a $140 million investment to establish seven new AI research institutes. Vice President Kamala Harris met Thursday with the heads of Google, Microsoft and other companies developing AI products.
By Mary Clare Jalonick / AP