The White House Meets with 7 Big Tech Companies – Releases Commitments on Managing AI


The White House today announced that it has accepted pledges from a number of high-profile tech companies for the safe development of AI. The Fact Sheet for the meeting can be found here. Seven companies convened at the White House to announce the voluntary agreements: Google, Microsoft, Meta, Amazon, OpenAI, Anthropic, and Inflection. Here are President Biden's comments after the meeting:

[Embedded video: President Biden's remarks after the meeting]

Here are a couple of commentaries we received from our friends in the big data ecosystem:

Commentary by Anita Schjøll Abildgaard, CEO and co-founder of Iris.ai

“With all emerging technology, the establishment of clear legal frameworks is necessary to ensure the technology is used safely and fairly. The requirement for developers to publish the authors of material used in chatbot training is an important measure for making sure that authors are credited, and the protections against overt surveillance have obvious benefits.

More regulation is coming, and the developers of AI models have a role to play. Transparency into how the models work will be crucial in promoting trust and accountability, while ensuring that regulation is adhered to. Another key aspect is explainability. AI systems that provide understandable explanations for their decisions will not only enhance transparency but also help to combat the biases in some models and prevent discriminatory practices.

It is important to recognize that AI governance is a complex and evolving field. The genie is already out of the bottle, and while regulators catch up with this hugely powerful technology, organizations developing AI can help to make sure its potential is harnessed for the benefit of everyone.”

Commentary by Aaron Mendes, CEO of PrivacyHawk 

“It’s nice to see big tech pledging to be responsible with AI. This move by the White House primarily helps with misinformation. Now we need more commitments to help protect consumers from the dangers of AI, particularly how their privacy can be violated, and personal data can be used for scams, fraud, and other cybercrimes. Even if they do some work on protecting consumers, it’s still important for individuals to reduce their digital footprint before it’s too late. Once malicious AI models have gobbled up all of our publicly available personal data, it’s too late to take it back.”
