

Will the EU’s AI and Data Privacy Laws Foreshadow the US?
Rules for AI are popping up around the world as regulators try to catch up…
ChatGPT has captured the public’s attention, amassing over 100 million users within a few short months of launch. Since then, the FTC has opened an investigation into OpenAI, examining, among other things, whether consumers have been harmed. Big Tech subsequently gathered at the White House with President Joe Biden and announced a set of “voluntary” risk management commitments. While I myself am not particularly concerned about AI taking over the world before the next US Presidential election, there are certainly other substantial risks associated with it, some of which I discussed in my previous article on the “voluntary” US AI risk management framework from the National Institute of Standards and Technology. This week, I turn to the EU, where binding rules are already arriving, to ask what they may foreshadow for the US.
Executive Summary
The EU’s Digital Services Act (DSA) is in force and impacts the largest global tech companies.
The new EU AI Act, expected later this year, will have a broad impact. It bans certain unacceptable-risk activities and places others under supervision.
Independent, third-party audits will be required for all companies subject to the DSA, and likely for some AI activities as well.
The US may follow the EU’s lead with similar laws, but for now most of the action in the US is voluntary.
You can see a summary of the voluntary US AI guidelines in my previous article.
The mission of this blog is to get critical information into the minds of sustainability professionals, executives, business owners, and you—to focus our efforts on the 20% of ideas that will solve 80% of the problem. If, in some small way, as a community, we can help save the planet, then our time together was well spent.
The New EU Digital Services Act Aims for Data Privacy
On August 25, 2023, the EU’s new Digital Services Act (DSA) took effect. The EU hopes to improve online safety by making the largest platforms legally responsible for policing their content; until now, only the poster was responsible. This legal shift will likely be expensive for the platforms affected, both in compliance staffing and in legal exposure, including potential lawsuits and fines. [1]
Before we cover the details, it is important to recognize that the DSA currently applies only to some of the largest digital platforms, so it should not burden smaller companies or startups. The 19 services designated under the DSA for reaching more than 45 million monthly active users in the EU are:
Alibaba AliExpress
Amazon Store
Apple AppStore
Bing
Booking.com
Facebook
Google Search
Google Maps
Google Play (app store)
Google Shopping
Instagram
LinkedIn
Pinterest
Snapchat
TikTok
Wikipedia
X (formerly known as Twitter)
YouTube
Zalando
The DSA requirements can be summarized as follows:
More User Empowerment:
Users will receive transparent explanations for content recommendations and can opt out of profiling-based recommendations.
Reporting mechanisms will be streamlined for users to flag illegal content, requiring platforms to act quickly and responsibly.
Platforms must avoid using sensitive data, such as ethnicity or political views, for displaying advertisements and must clearly label all ads along with their promoters.
Strong Protection of Minors:
Platforms must redesign their systems to ensure the privacy, security, and safety of minors, prioritizing these aspects in their design.
Targeted advertising based on the profiling of children will be banned, limiting how platforms can monetize minors’ engagement.
Special risk assessments are mandated, which must be submitted to the Commission and made public, to identify and mitigate risks such as negative mental health effects on minors.
More Diligent and Rules-Based Content Moderation, Less Disinformation:
Platforms are responsible for taking active steps to mitigate the spread of illegal content and must protect freedom of expression and information.
Platforms are expected to have and enforce clear terms and conditions in a non-arbitrary manner.
User mechanisms to report illegal content must be provided, and platforms must act on these reports swiftly, implementing measures to address the spread of disinformation (a sketch of such a notice follows this list).
More Transparency and Accountability:
Platforms must undergo external, independent audits to confirm compliance with all Digital Services Act (DSA) obligations.
Public data access will be granted to researchers, and a special mechanism will be established for vetted researchers to access more sensitive data.
Transparency reports must be published on ad content and content moderation decisions, in addition to repositories of all ads served, to enhance public scrutiny.
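For developers, the notice-and-action requirement is the most concrete of these obligations. Below is a minimal sketch, in Python, of the kind of structured report a platform might accept from users flagging illegal content. The IllegalContentNotice fields and submit_notice() helper are hypothetical illustrations, not a schema from the regulation itself.

```python
# A minimal sketch of a hypothetical "notice and action" report that lets
# users flag illegal content. Field names are illustrative assumptions,
# not an official DSA schema.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class IllegalContentNotice:
    content_url: str     # exact location of the allegedly illegal content
    explanation: str     # why the reporter believes the content is illegal
    reporter_email: str  # contact address for status updates on the notice
    submitted_at: str    # UTC timestamp starting the clock on a "swift" response

def submit_notice(url: str, explanation: str, email: str) -> IllegalContentNotice:
    """Build a notice record; a real platform would queue it for moderator review."""
    return IllegalContentNotice(
        content_url=url,
        explanation=explanation,
        reporter_email=email,
        submitted_at=datetime.now(timezone.utc).isoformat(),
    )

notice = submit_notice(
    "https://example.com/post/123",
    "This listing appears to sell counterfeit goods.",
    "reporter@example.com",
)
print(notice)
```

A production system would also record the platform’s decision and notify the affected parties, which this sketch omits.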
Expectations for the Upcoming AI Act in the EU
The new AI Act sorts AI systems into risk categories based on their intended purpose and bans some uses outright. It is expected to be finalized later in 2023. [2]
Unacceptable Risk:
AI systems deemed a threat to people or vulnerable groups, including children, will be banned, such as those involving cognitive behavioral manipulation or social scoring.
Systems for real-time and remote biometric identification, like facial recognition, are considered unacceptable risks.
Some exceptions may be allowed, such as delayed remote biometric identification for prosecuting serious crimes, but only after court approval.
High Risk:
High-risk AI systems are categorized into those covered by the EU’s product safety legislation (toys, aviation, cars, medical devices, elevators) and those in eight other specific areas: biometric identification, critical infrastructure management, education, employment, essential services, law enforcement, migration and asylum management, and legal interpretation.
These systems must be assessed before entering the market, registered, and continuously evaluated throughout their lifecycle to ensure they do not compromise safety or fundamental rights.
Generative AI:
Generative AI models like ChatGPT must disclose that the content they produce is AI-generated (see the sketch after these lists).
These AI models are required to be designed in such a way that they do not produce illegal content.
Summaries of copyrighted data used in training these models must be publicly disclosed.
Limited Risk:
Limited risk AI systems must meet minimal transparency requirements so users can make informed decisions about interacting with them.
Users should be made aware when they are interacting with an AI system; this category typically covers systems that generate or manipulate image, audio, or video content, such as deepfakes.
The aim is to allow users to decide whether they want to continue interacting with these AI systems after initial engagement.
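To make the transparency requirements concrete, here is a minimal sketch of how a generative AI service might attach a machine-readable “AI-generated” disclosure to its output. The AIContentLabel structure and label_ai_output() helper are hypothetical; the draft Act requires disclosure but does not prescribe a format.

```python
# A minimal sketch of attaching an "AI-generated" disclosure to model output.
# AIContentLabel and label_ai_output() are hypothetical illustrations; the
# draft AI Act requires disclosure but does not prescribe a wire format.
import json
from dataclasses import dataclass, asdict

@dataclass
class AIContentLabel:
    ai_generated: bool  # explicit disclosure that the content is AI-generated
    model_name: str     # which model produced the content

def label_ai_output(text: str, model_name: str) -> str:
    """Wrap model output with a machine-readable AI-generated disclosure."""
    payload = {
        "label": asdict(AIContentLabel(ai_generated=True, model_name=model_name)),
        "content": text,
    }
    return json.dumps(payload)

print(label_ai_output("The sky appears blue because...", "example-model-v1"))
```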
One observation: the rules for generative AI currently stop at transparency and avoiding illegal content, yet one can easily imagine questions and answers, on employment screening or legal interpretation, say, that drift into the high-risk areas listed above.
What Can We Expect in the US?
If the past is any guide, we will not see AI legislation in the US until there is some kind of emergency. Nevertheless, we could see other regulatory enforcement (and possibly new rules) sooner than that, perhaps from the FTC. An example of this was the joint statement made on April 25, 2023, by FTC Chair Lina M. Khan, along with officials from the Civil Rights Division of the U.S. Department of Justice, the Consumer Financial Protection Bureau, and the U.S. Equal Employment Opportunity Commission, summarized as follows [3]:
The goal is to uphold fairness, equality, and justice as AI and automated systems become increasingly integrated into daily life.
The agencies have expressed prior concerns about the potentially harmful uses of AI and automated systems, particularly in areas like civil rights, consumer protection, and equal opportunity.
There is no “AI exemption” to existing laws, and agencies will use their full legal authority to combat fraud, discrimination, and unfair practices enabled by these technologies. AI will not serve as a cover for illegal activities.
“We already see how AI tools can turbocharge fraud and automate discrimination, and we won’t hesitate to use the full scope of our legal authorities to protect Americans from these threats. Technological advances can deliver critical innovation—but claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.” - FTC Chair Lina M. Khan, April 25, 2023.
Observations
While the US is clearly the regulatory laggard regarding AI and technology, that may not be the case forever or in every area. The NIST AI voluntary guidelines are a good example and may de facto shift into law at some point. Stay tuned. And engaged.
News of the Week
Impact of AI on Elections [Economist]
Robots Are Pouring Drinks in Las Vegas [Apple News]
How worried should we be about AI’s threat to humanity?
Sustainability takes a back seat in autos [Reuters]
Sources
[1] European Commission, Digital Services Act: Commission designates first set of Very Large Platforms and Search Engines. April, 2023. Downloaded from the web 9/3/23. https://ec.europa.eu/commission/presscorner/detail/en/IP_23_2413
[2] European Parliament, EU AI Act: first regulation on artificial intelligence. June 14, 2023. Downloaded from the web 9/3/23. https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
[3] Federal Trade Commission, FTC Chair Khan and Officials from DOJ, CFPB and EEOC Release Joint Statement on AI. April 25, 2023. Downloaded from the web 9/3/23. https://www.ftc.gov/news-events/news/press-releases/2023/04/ftc-chair-khan-officials-doj-cfpb-eeoc-release-joint-statement-ai
Copyright © 2023 by Alec Crawford
#ai #airisk #chatgpt #innovation #STAYblog