Last week the White House announced that it had convened seven leading AI companies to discuss how best to ensure the safe, secure, and transparent use of AI technology. The companies are Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. The White House also announced that the companies had agreed to eight “voluntary commitments,” included at the end of this post.
Perhaps unsurprisingly, four of the commitments focus on security testing, safeguards, and reporting. Two others focus on research into the risks and potential benefits of AI. Of the two remaining, one involves developing a “watermarking” system to identify AI-generated content, and the last commits the companies to “publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use. This report will cover both security risks and societal risks, such as the effects on fairness and bias.”
What is surprising, however, are two glaring omissions from the set of commitments: ensuring model reliability and limiting the companies’ use of client data. On the first, the CFPB has expressed significant concern over these models’ accuracy and reliability, and over the danger this creates in consumer-facing uses such as direct-to-consumer chatbots. While reliability is perhaps the hardest challenge for AI to tackle, there are some basic steps these companies could take toward improving it. Currently, models like OpenAI’s ChatGPT often do not cite the sources behind a particular answer, making it difficult for the individual user to confirm accuracy and oversee the results. In fact, when asked what a regulation permits or prohibits, for example, ChatGPT will often give either a broad source (the CFPB, say) or an answer explaining that the model draws on a variety of resources and cannot cite specific ones for that particular response. The inability to trace the source and double-check the result will make it very difficult for many industries, such as financial services, to fully take advantage of these models without additional product overlays.
The other surprising omission from the set of commitments relates to the companies’ own use of the data shared with their models. OpenAI, in particular, has faced significant regulatory scrutiny over its use of customer data and inputs to train its models. In fact, the FTC has launched an investigation into whether the company has "engaged in unfair or deceptive data security practices." Perhaps in response to these concerns, OpenAI has created options for users of some of its solutions to limit the company’s use of their data to train the model or develop new products, while still allowing it to use the data for some internal monitoring purposes. These options do not currently apply to some of its more popular solutions, such as ChatGPT. Given that data is the lifeblood of these models, it seems appropriate to create additional incentives for all AI companies to clearly disclose how user data is used and to permit clients to limit that use in informed, practical ways.
These issues, reliability and vendor data use, are two of the challenges that Vectari seeks to address through its compliant-by-design solutions. For example, to strengthen reliability, our Policy GPT product will give users the option of receiving the specific citation and source behind each answer, allowing them to double-check the result for themselves and read further into the related policy if useful. Policy GPT will also be closely overseen by regulatory experts, who will give the model ongoing feedback to ensure accuracy. These safeguards will make it more feasible for financial institutions (FIs) to rely on the model to enhance the work of their human workforce, and to explain to auditors and regulators how its answers are derived and used. To tackle the data use issue, Vectari provides solutions that can be implemented fully within an FI’s internal environment, behind its firewalls, eliminating the need to share data with an outside AI model vendor. This allows FIs to rest assured that their data will not be used by the vendor for model training or other purposes that could inadvertently expose confidential information.
Of course, these omissions from the “voluntary commitments” show that companies using these models will still need to implement additional safeguards of their own. Even so, the White House’s effort to partner with industry to tackle some of the inherent risks of AI is a promising sign that, rather than trying to fight the growth of this technology, the government is willing to take a collaborative approach to help realize its potential safely.
---
From: FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI, July 21, 2023, available at www.whitehouse.gov/briefing-room/statements-releases/.
Ensuring Products are Safe Before Introducing Them to the Public
Building Systems that Put Security First
Earning the Public’s Trust