EU wants Big Tech to deploy AI tech on its own terms

Why it matters: The EU’s Margrethe Vestager will soon reveal the bloc’s plans to bolster its technological capabilities and keep up with China and the US. In the meantime, Big Tech is lobbying for a softer approach to deploying AI with fewer regulatory barriers, but given Vestager’s track record of hammering big companies with hefty fines, they’re likely to face a choice between leaving money on the table and complying with stricter rules.

While China and the US are fighting over information and economic dominance, the European Union is scrambling to lay down a new set of rules that will govern how tech companies operate in the region, especially when it comes to privacy and security protections, transparency, and establishing a level playing field for all players involved.

On Wednesday, the EU’s newly appointed executive vice-president for digital policy, Margrethe Vestager, is expected to unveil a draft of the new regulation, along with a set of recommendations for future rules governing the use of AI in high-risk sectors such as transportation, manufacturing, energy, biometric identification, and healthcare. The main focus of the legislation is to establish clear safety and liability rules for tech giants working to disrupt those fields.

The general perception is that Europe is far behind the US and China in terms of AI development. However, according to the McKinsey Global Institute, the EU’s gap in digital tech can be overcome by funding startup development, supporting the digital transformation of non-tech companies, and encouraging investors to bring in more capital. As for talent, there are an estimated 5.7 million software developers in the EU, which is impressive when you consider that the US has only around 4.4 million.

The EU can’t pass up the opportunity to generate an extra €900 billion ($975 billion) by 2030, but MEPs want to chart a course for AI adoption that also factors in the potential dangers of misuse, societal bias, the probability of errors in automated decision-making systems, and consumer protections.

Silicon Valley giants are worried about the new development and have scheduled meetings with MEPs to discuss the new rules. Facebook CEO Mark Zuckerberg has already tried to address the issue of handling harmful content as a way to boost public trust, but his proposals come at a time when the company is being probed over GDPR violations and questionable data collection practices.

Vestager noted that she isn’t worried about algorithms that reshuffle content streams to better match consumers’ interests, but rather about things like facial recognition or automated systems that decide who can get a loan, which are known to be highly prone to bias and error.