AI’s hype and antitrust problem is coming under scrutiny

Last Thursday, Senators Elizabeth Warren and Eric Schmitt introduced a bill aimed at stirring up more competition for Pentagon contracts awarded in AI and cloud computing. Amazon, Microsoft, Google, and Oracle currently dominate those contracts. “The way that the big get bigger in AI is by sucking up everyone else’s data and using it to train and expand their own systems,” Warren told the Washington Post.

The new bill would “require a competitive award process” for contracts, which would ban the use of “no-bid” awards by the Pentagon to companies for cloud services or AI foundation models. (The lawmakers’ move came a day after OpenAI announced that its technology would be deployed on the battlefield for the first time, in a partnership with Anduril, completing a year-long reversal of its policy against working with the military.)

While Big Tech is hit with antitrust investigations, including the ongoing lawsuit against Google over its dominance in search as well as a new investigation opened into Microsoft, regulators are also accusing AI companies of, well, just straight-up lying. 

On Tuesday, the Federal Trade Commission took action against the smart-camera company IntelliVision, saying that the company makes false claims about its facial recognition technology. IntelliVision has promoted its AI models, which are used in both home and commercial security camera systems, as operating without gender or racial bias and being trained on millions of images, two claims the FTC says are false. (The company could not support the bias claim, and the system was trained on only 100,000 images, the FTC says.)

A week earlier, the FTC made similar claims of deceit against the security giant Evolv, which sells AI-powered security scanning products to stadiums, K-12 schools, and hospitals. Evolv advertises its systems as offering better security than simple metal detectors, saying they use AI to accurately screen for guns, knives, and other threats while ignoring harmless items. The FTC alleges that Evolv has inflated its accuracy claims, and that its systems failed in consequential cases, such as a 2022 incident in which they failed to detect a seven-inch knife that was ultimately used to stab a student. 

These add to the complaints the FTC made back in September against a number of AI companies, including one that sold a tool to generate fake product reviews and one selling “AI lawyer” services.