
BSA, a tech advocacy group backed in part by Microsoft, released a document on Monday calling for rules governing the use of artificial intelligence to be included in national privacy legislation.
The group represents major software firms including Adobe, IBM, and Oracle. Microsoft is a prominent player in AI following its investment in OpenAI, the developer of the AI-powered chatbot ChatGPT; Google, another key player in advanced AI, is not a BSA member.
Many members of Congress, including Senate Majority Leader Chuck Schumer, D-N.Y., have emphasized the importance of regulating AI in a timely manner as the technology continues to develop rapidly. That sense of urgency is what BSA's proposal aims to address.
The BSA is pushing for the implementation of four critical safeguards:
- Congress should specify the conditions under which companies are required to assess the designs or effects of AI.
- These requirements should apply when AI is utilized to make “significant decisions,” which Congress should also define.
- Congress should assign an established federal agency to review company certifications of compliance with the regulations.
- Companies should be required to create risk-management plans for high-risk AI.
Craig Albright, BSA’s vice president of U.S. government relations, said the group, as an industry body, is urging Congress to pass such legislation. He added that BSA is trying to raise awareness of the opportunity because it has not received as much attention as it should have.
Albright said the proposal is not intended to solve every issue related to AI, but rather to address an essential question about the technology that Congress can effectively tackle.
The development of user-friendly advanced AI technologies, such as ChatGPT, has expedited the call for regulations on the technology. Although the U.S. has established a non-mandatory risk management framework, several proponents have urged for stronger safeguards. Meanwhile, Europe is in the process of finalizing its AI Act, which implements protections for high-risk AI.
Albright pointed out that as Europe and China continue to adopt rules to promote and regulate emerging technologies, American policymakers should consider whether digital transformation is a crucial component of their economic agenda.
Albright suggested that if digital transformation is considered an essential part of the economic agenda, there should be a national agenda for it, which includes regulations for AI, national privacy standards, and strong cybersecurity policies.
According to the suggestions outlined by BSA to Congress, which were shared with CNBC, the group recommends that the American Data Privacy and Protection Act, a bipartisan privacy bill that passed out of the House Energy and Commerce Committee last Congress, be used as a basis for new AI regulations. Although the bill still has a long way to go to become law, BSA believes it already has the appropriate framework for the kind of national AI safeguards the government should implement.
BSA is hoping that when the ADPPA is reintroduced, it will include new provisions to regulate AI. Albright said the group has been communicating its recommendations to the House Energy and Commerce Committee and that the committee has been receptive to feedback from various stakeholders. A spokesperson for the House E&C Committee did not immediately respond to a request for comment.
Albright acknowledged that passing any piece of legislation, including the ADPPA, is challenging. However, he emphasized that the proposed AI regulations are achievable and bipartisan, and could be included in whatever legislation Congress creates. “Our hope is that however they’re going to legislate, this will be a part of it,” Albright said.