
How much all-seeing AI surveillance is too much?

In this April 23, 2018, photo, Ashley McManus, global marketing director of the Boston-based artificial intelligence firm Affectiva, demonstrates facial recognition technology geared to detect driver distraction at the company's offices in Boston.

BOSTON (AP) — When a CIA-backed venture capital fund took an interest in Rana el Kaliouby's face-scanning technology for detecting emotions, the computer scientist and her colleagues did some soul-searching — and then turned down the money.

"We're not interested in applications where you're spying on people," said el Kaliouby, the CEO and co-founder of the Boston startup Affectiva. The company has trained its artificial intelligence systems to recognize whether individuals are happy or sad, tired or angry, using a photographic repository of more than 6 million faces.

Recent advances in AI-powered computer vision have accelerated the race for self-driving cars and powered the increasingly sophisticated photo-tagging features found on Facebook and Google. But as these prying AI "eyes" find new applications in store checkout lines, police body cameras and war zones, the tech companies developing them are struggling to balance business opportunities with difficult moral decisions that could turn off customers or their own workers.

El Kaliouby said it's not hard to imagine using real-time face recognition to pick up on dishonesty — or, in the hands of an authoritarian regime, to monitor reaction to political speech in order to root out dissent. But the small firm, which spun off from an MIT research lab, has set limits on what it will do. The company has shunned "any security, airport, even lie detection stuff," el Kaliouby said.
Instead, Affectiva has partnered with automakers trying to help tired-looking drivers stay awake, and with consumer brands that want to know whether people respond to a product with joy or disgust.

Such queasiness reflects new qualms about the capabilities and possible abuses of all-seeing, always-watching AI camera systems — even as authorities grow more eager to use them.

In the immediate aftermath of Thursday's deadly shooting at a newspaper in Annapolis, Maryland, police said they turned to face recognition to identify the uncooperative suspect. They did so by tapping a state database that includes mug shots of past arrestees and, more controversially, everyone who has registered for a Maryland driver's license.

Initial reports indicated that police had turned to facial recognition because the suspect had damaged his fingerprints in an apparent attempt to avoid identification. That turned out to be incorrect; police said they used facial recognition because of delays in getting fingerprint identification.

In June, Orlando International Airport announced plans to require face-identification scans of passengers on all arriving and departing international flights by the end of this year. Several other U.S. airports already use such scans for some, but not all, departing international flights.

Chinese firms and municipalities are already using intelligent cameras to shame jaywalkers in real time and to surveil ethnic minorities, subjecting some to detention and political indoctrination. Closer to home, the overhead cameras and sensors in Amazon's new cashier-less store in Seattle aim to make shoplifting obsolete by tracking every item shoppers pick up and put back down.

Concerns over the technology can shake even the largest tech firms. Google, for instance, recently said it will exit a defense contract after employees protested the military application of the company's AI technology.
The work involved computer analysis of drone video footage from Iraq and other conflict zones. Similar concerns about government contracts have stirred up internal discord at Amazon and Microsoft.

Google has since published AI guidelines emphasizing uses that are "socially beneficial" and that avoid "unfair bias." Amazon, however, has so far deflected growing pressure from employees and privacy advocates to halt Rekognition, a powerful face-recognition tool it sells to police departments and other government agencies.

Saying no to some work, of course, usually means someone else will do it. The drone-footage project involving Google, dubbed Project Maven, aimed to speed the job of looking for "patterns of life, things that are suspicious, indications of potential attacks," said Robert Work, a former top Pentagon official who launched the project in 2017. While it hurts to lose Google because they are "very, very good at it," Work said, other companies will continue those efforts.

Commercial and government interest in computer vision has exploded since breakthroughs earlier in this decade using a brain-like "neural network" to recognize objects in images. Training computers to identify cats in YouTube videos was an early challenge in 2012. Now, Google has a smartphone app that can tell you which breed.

A major research meeting — the annual Conference on Computer Vision and Pattern Recognition, held in Salt Lake City in June — has transformed from a sleepy academic gathering of "nerdy people" into a gold-rush business expo attracting big companies and government agencies, said Michael Brown, a computer scientist at Toronto's York University and a conference organizer.

Brown said researchers have been offered high-paying jobs on the spot. But few of the thousands of technical papers submitted to the meeting address broader public concerns about privacy, bias or other ethical dilemmas. "We're probably not having as much discussion as we should," he said.
Startups are forging their own paths. Brian Brackeen, the CEO of Miami-based facial recognition software company Kairos, has set a blanket policy against selling the technology to law enforcement or for government surveillance, arguing in a recent essay that it "opens the door for gross misconduct by the morally corrupt."

Boston-based startup Neurala, by contrast, is building software for Motorola that will help police-worn body cameras find a person in a crowd based on what they're wearing and what they look like. CEO Max Versace said that "AI is a mirror of the society," so the company chooses only principled partners. "We are not part of that totalitarian, Orwellian scheme," he said.
