TikTok Is Allowing People to Shut Off Its Famous Algorithm


Zero Trust: A Framework for Regulating Artificial Intelligence to Address Discrimination, Bias, and Competition in Generative AI

Accountable Tech and its partners have proposed bright-line rules (policies that are clearly defined and leave no room for subjectivity) as lawmakers continue to meet with AI companies.

The group sent the framework this month to politicians and government agencies, mainly in the US, asking them to consider it while crafting new laws and regulations around AI.

Zero Trust is a framework built on three principles: enforce existing laws; create bold, easily implemented rules; and place the burden on companies to prove their systems are not harmful at each stage of the AI lifecycle. Its definition of AI encompasses both generative AI and the foundation models that enable it, along with algorithmic decision-making.

“We wanted to put out the framework now because the technology is evolving quickly, but new laws can’t move at that speed,” says Jesse Lehrich, co-founder of Accountable Tech.

As the government continues to figure out how to regulate generative AI, the group said current laws around antidiscrimination, consumer protection, and competition help address present harms.

Researchers have warned about discrimination and bias in AI for years. A recent Rolling Stone article charted how well-known experts such as Timnit Gebru sounded the alarm on these issues for years, only to be ignored by the companies that employed them.

AI Companies Must Prove Their AI Is Safe, Says Nonprofit Group

“The idea behind Section 230 makes sense in broad strokes, but there is a difference between a bad review on Yelp because someone hates the restaurant and GPT making up defamatory things,” Lehrich says. (Section 230 was passed in part precisely to shield online services from liability over defamatory content, but there’s little established precedent for whether platforms like ChatGPT can be held liable for generating false and damaging statements.)

The proposed bright-line rules include prohibiting AI use for emotion recognition, predictive policing, facial recognition for mass surveillance in public places, social scoring, and fully automated hiring, firing, and HR management. The group also asks for a ban on collecting excessive amounts of sensitive data for any given service, and on collecting biometric data in fields like education and hiring.

Accountable Tech also urged lawmakers to prevent large cloud providers from owning or having a beneficial interest in large commercial AI services, in order to limit Big Tech’s impact on the AI ecosystem. Cloud providers such as Microsoft and Google have an outsize influence on generative AI. OpenAI, the best-known generative AI developer, works closely with Microsoft, which has also invested in the company. Google has released Bard, its generative AI chatbot, and is developing other large language models for commercial use.

The group proposes an approach similar to the one used in the pharmaceutical industry, where companies submit to regulatory review before deploying an AI model to the public and to ongoing monitoring after commercial release.

The nonprofits are not calling for a single government regulatory body. Splitting rule-making across agencies could make the rules less strict or harder to enforce, but that is a question lawmakers will have to grapple with.

Lehrich says it’s understandable smaller companies might balk at the amount of regulation they seek, but he believes there is room to tailor policies to company sizes.

Source: AI companies must prove their AI is safe, says nonprofit group

The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology (a WIRED Opinion)


TikTok recently announced that its users in the European Union will soon be able to switch off its infamously engaging content-selection algorithm. The EU’s Digital Services Act is driving the change, part of the region’s broader effort to regulate digital services in accordance with human rights and values.

The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology (St. Martin’s Press, 2023) is a book by Nita Farahany, the Robinson O. Everett Professor of Law and Philosophy at Duke University.

Tax incentives and funding could stimulate business practices and products that enhance cognitive liberty. Leading AI ethics researchers emphasize that organizations must prioritize safety to counter the risks posed by large language models. The proposed Platform Accountability and Transparency Act could encourage this by offering tax breaks and funding opportunities to companies that collaborate with educational institutions to create AI safety programs. Tax incentives could also support research and innovation on tools and techniques that surface deception by AI models.

Technology companies should also adopt design principles that embody cognitive liberty. Options like the new TikTok settings are a step in the right direction. Other features that enable self-determination (labeling content with “badges” that specify whether it is human- or machine-generated, or prompting users to engage critically with an article before resharing it) should become the norm across digital platforms.
