Starting today, Instagram will start warning users when they’re about to post a “potentially offensive” caption for a photo or video that’s being uploaded to their main feed, the company has announced. If an Instagram user posts something that the service’s AI-powered tools think could be hurtful, the app will generate a notification to say that the caption “looks similar to others that have been reported.” It will then encourage the user to edit the caption, but it will also give them the option of posting it unchanged.
The new feature builds upon a similar AI-powered tool that Instagram introduced for comments back in July. The company says that nudging people to reconsider posting potentially hurtful comments has had “promising” results in the company’s fight against online bullying.
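The mechanism Instagram describes, flagging a caption because it "looks similar to others that have been reported," is at heart a text-similarity problem. Purely as an illustration (Instagram's real system is a proprietary deep-learning model; the `REPORTED` list, `should_warn`, and the threshold below are all invented for this sketch), a naive bag-of-words version of the idea might look like this:

```python
import re
from collections import Counter
from math import sqrt

# Toy list of previously reported captions (invented for this sketch).
REPORTED = ["you are so ugly and stupid", "nobody likes you loser"]

def bag_of_words(text):
    """Lowercase word counts: a crude stand-in for a learned text model."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[word] * b[word] for word in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def should_warn(caption, threshold=0.6):
    """Flag a caption that looks similar to one already reported."""
    words = bag_of_words(caption)
    return any(cosine(words, bag_of_words(r)) >= threshold for r in REPORTED)
```

Here a caption is flagged when its similarity to any previously reported caption clears a fixed threshold; a production system would instead use a classifier trained on a huge volume of reports.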
Posting a potentially offensive comment will generate a warning that encourages you to edit it.
The difference from Instagram's other moderation tools is that this one relies on users to spot when one of their own posts crosses the line. It's unlikely to stop the platform's more determined bullies, but it has a shot at protecting people from thoughtless insults.
Instagram says the new feature is rolling out in “select countries” for now, but it will expand globally in the coming months.
San Francisco (CNN Business) In 2010, artificial intelligence was more likely to pop up in dystopian science-fiction movies than in everyday life. And it certainly wasn't something people worried might take over their jobs in the near future.
A lot has changed since then. AI is now used for everything from helping you take better smartphone photos and analyzing your personality in job interviews to letting you buy a sandwich without paying a cashier. It's also becoming increasingly common — and controversial — when used for surveillance, such as facial-recognition software, and for spreading misinformation, as with deepfake videos that purport to show a person doing or saying something they didn't.
How did AI come to invade so many different parts of our lives over the last decade? The answer lies in technological advancements in the field, combined with cheaper, easier access to more powerful computers.
Much of the AI you encounter on a regular basis uses a technique known as machine learning, which is when a computer teaches itself by poring over data. More specifically, major developments over the last decade focused on a type of machine learning, called deep learning, that's modeled after the way neurons work in the brain. With deep learning, a computer might be tasked with looking at thousands of videos of cats, for instance, to learn to identify what a cat looks like (and, in fact, it was a big deal when Google figured out how to do this reliably in 2012).
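To make the "teaches itself by poring over data" idea concrete, here is a toy, numpy-only sketch of the training loop behind deep learning: a tiny one-hidden-layer network that learns a labeled pattern by gradient descent. The data, network size, and learning rate are all invented for the example; real systems like Google's cat detector use vastly larger networks and datasets.

```python
import numpy as np

# Toy stand-in for "thousands of cat videos": 200 labeled 2-D points.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # label: which side of a line

# One hidden layer of "neurons," loosely modeled on the brain.
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):                       # gradient-descent training loop
    h = np.tanh(X @ W1)                    # hidden activations
    p = sigmoid(h @ W2).ravel()            # predicted probability of label 1
    grad_out = (p - y)[:, None] / len(X)   # derivative of cross-entropy loss
    gW2 = h.T @ grad_out                   # backpropagate to each layer...
    gW1 = X.T @ ((grad_out @ W2.T) * (1 - h ** 2))
    W2 -= 1.0 * gW2                        # ...and nudge the weights downhill
    W1 -= 1.0 * gW1

p = sigmoid(np.tanh(X @ W1) @ W2).ravel()  # final predictions
accuracy = float(((p > 0.5) == y).mean())
```

The network is never told the rule behind the labels; it infers it from examples alone, which is the core of what "poring over data" means in practice.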
"Ten years ago, deep learning was not on anybody's radar, and now it's in everything," said Pedro Domingos, a computer science professor at the University of Washington.
AI is still quite simplistic. A machine-learning algorithm, for instance, typically does just one thing and often requires mountains of data to learn how to do it well. A lot of work in the field of AI focuses on making machine learning systems better at generalizing and learning from fewer examples, Domingos said.
"We've come a thousand miles, but there's a million miles still to go," he said.
With a nod to those thousand miles already in the technological rear-view mirror, CNN Business took a look back at the last 10 years of AI's journey, highlighting six of the many ways it has impacted our lives.
These days, artificial intelligence is all over smartphones, from the facial-recognition software that unlocks the handset to popular apps like Google Maps. Increasingly, companies like Apple and Google are trying to run AI directly on handsets, using chips specifically designed for AI-driven capabilities, so activities like speech recognition can be performed on the phone rather than on a remote computer. That can speed up tasks like translating words from one language to another, and it helps preserve data privacy.
General view of the Apple IPhone XR during the Covent Garden re-opening and iPhone XR launch at Apple store, Covent Garden on October 26, 2018 in London, England.
One deceptively simple-sounding example of this popped up in October, when Google introduced a transcription app called Recorder. It can record and transcribe, in real time. It knows what you're saying and identifies various sounds like music and applause; the recordings can later be searched by individual words. The app can run entirely on Google Pixel smartphones. Google said this was difficult to accomplish because it requires several pieces of AI that must work without killing the phone's battery life or taking up too much of its main processor. If consumers take a shine to the app, it could lead to yet more AI being squeezed onto our smartphones.
When Facebook began in 2004, it focused on connecting people. These days, it's fixated on doing so with artificial intelligence. It's become so core to the company's products that a year ago, Facebook's chief AI scientist, Yann LeCun, told CNN Business that without deep learning the social network would be "dust."
After years of investment, deep learning now underpins everything from the posts and ads you see on the site to the ways your friends can be automatically tagged in photos. It can even help remove content like hate speech from the social network. It's still got a long way to go, though: content like violence and hate speech remains tricky for machines to identify reliably.
And Facebook isn't the only one; it's simply the biggest. Instagram, Twitter, and other social networks rely heavily on AI, too.
Any time you talk to Amazon's Alexa, Apple's Siri, or Google's Assistant, you're having an up-close-and-personal interaction with AI. This is most notable in the ways these helpers understand what you're saying and (hopefully) respond with what you want.
The rise of these virtual assistants began in 2011, when Apple released Siri on the iPhone. Google followed with Google Now in 2012 (a newer version, Google Assistant, came out in 2016).
But while many consumers took a shine to Apple's and Google's early computerized helpers, they were mostly confined to smartphones. In many ways, it was Amazon's Alexa, introduced in 2014 and embodied by an Internet-connected speaker called the Amazon Echo, that helped the virtual assistant market explode — and brought AI to many more homes in the process.
Consider this: During just the third quarter of 2019, Amazon shipped 10.4 million Alexa-using smart speakers, making up the biggest single chunk (nearly 37%) of the global market for these gadgets, according to data from Canalys.
As AI has improved, so have its capabilities as a surveillance tool. One of the most controversial of these is facial recognition technology, which identifies people from live or recorded video or still photos, typically by comparing their facial features with those in a database of faces. It's been used in many different settings: at concerts, by police, and at airports, to name a few.
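The comparison step described above, matching facial features against a database of faces, is often implemented by comparing feature vectors (embeddings) produced by a deep network. A minimal illustrative sketch follows; the names, 128-dimensional random "embeddings," and threshold are all invented here, whereas a real system would derive the vectors from face images.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical "database of faces": names mapped to feature vectors. In a
# real system these 128-dim embeddings come from a deep network, not rng.
rng = np.random.default_rng(1)
database = {name: rng.normal(size=128) for name in ["alice", "bob", "carol"]}

def identify(probe, db, threshold=0.5):
    """Return the best-matching name, or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, embedding in db.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# A probe that is a slightly noisy copy of "bob" should match bob.
probe = database["bob"] + rng.normal(scale=0.1, size=128)
match = identify(probe, database)
```

The choice of threshold is where accuracy concerns enter: set it too low and the system misidentifies strangers as database entries, which is precisely the failure mode the bias studies below measure.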
A display shows a facial recognition system for law enforcement during the NVIDIA GPU Technology Conference, which showcases artificial intelligence, deep learning, virtual reality and autonomous machines, in Washington, DC, November 1, 2017.
Facial recognition systems have come under growing scrutiny, however, due to concerns about privacy and accuracy. In December, for instance, a US government study found extensive racial bias in almost 200 facial recognition algorithms, with racial minorities much more likely to be misidentified than whites.
In the US, there are few rules governing how AI in general, and facial recognition in particular, can be deployed. So in 2019, several cities, including San Francisco and Oakland in California and Somerville in Massachusetts, banned city departments (including police) from using the technology.
AI is increasingly being used to diagnose and manage all kinds of health issues, from spotting lung cancer to keeping an eye on mental health problems and gastrointestinal issues. Though much of this work is still in the research or early-development stages, there are startups — such as Mindstrong Health, which uses an app to measure moods in patients who are dealing with mental health issues — already trying out AI systems with people.
AI detection scan used for lung cancer malignancy prediction.
Two startups in the midst of this are Auggi, a gut-health startup building an app to help track gastrointestinal issues, and Seed Health, which sells probiotics and works on applying microbes to human health. In November, they started collecting photos of poop from the general public that they intend to use to make a data set of human fecal images. Auggi wants to use these pictures to make an app that can use computer vision to automatically classify different types of waste that people with chronic gut-related problems — such as irritable bowel syndrome, or IBS — usually have to track manually with pen and paper.
Can AI create art? More and more often the answer is yes. Over the last 10 years, AI has been used to make musical compositions, paintings and more that seem very similar to the kinds of things humans come up with (though the jury is still out on whether a machine can actually possess creativity). And sometimes, that art can even be a big money maker.
Pierre Fautrel, co-founder of a group that produces art using AI, stands next to "Portrait of Edmond de Belamy," the first work produced by a machine to be sold at auction.
The print was created using a cutting-edge technique known as a GAN (generative adversarial network), in which two neural networks compete with each other to come up with something new based on a data set. In this case, the data set was a slew of existing paintings, and the new thing was the computerized artwork. GANs are also gaining notoriety because the technique can be used to make deepfakes.
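For a sense of how that two-network competition works, here is a deliberately tiny GAN sketched in plain numpy: a two-parameter "generator" learns to mimic samples from a target distribution by fooling a logistic "discriminator." Everything here (the 1-D data, learning rate, and network shapes) is a toy assumption; the Belamy portrait was produced with far larger image-generating networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Real" data set: samples from N(4, 1). The generator starts near 0 and
# must learn to produce samples the discriminator can't tell from real ones.
def real_batch(n):
    return rng.normal(loc=4.0, scale=1.0, size=n)

a, b = 1.0, 0.0   # generator G(z) = a*z + b
w, c = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + c)

lr, n = 0.05, 64
for _ in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    x_real, z = real_batch(n), rng.normal(size=n)
    x_fake = a * z + b
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w -= lr * np.mean((d_real - 1) * x_real + d_fake * x_fake)
    c -= lr * np.mean((d_real - 1) + d_fake)
    # Generator step: push D(fake) toward 1 (the non-saturating GAN loss).
    z = rng.normal(size=n)
    x_fake = a * z + b
    grad_x = (sigmoid(w * x_fake + c) - 1) * w   # d loss / d x_fake
    a -= lr * np.mean(grad_x * z)
    b -= lr * np.mean(grad_x)

# After training, generated samples should cluster near the real mean of 4.
fake_mean = float(np.mean(a * rng.normal(size=1000) + b))
```

Neither network ever sees the other's parameters; each only reacts to the other's outputs, and the arms race is what drags the generator's fakes toward the real data.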
So what does it all mean for the Google products that so many of us rely on every day? Let's take a look at the impact on its three big categories.
What it means for Google search and services
Despite all the flashy things Alphabet and Google do around self-driving cars, fiber broadband, internet balloons and "other bets," the Google search engine and the massive advertising revenue it generates still pay virtually all the bills for the company. The vast majority (over 70%) of the company's revenue comes from search, although YouTube is becoming a much bigger contributor and the company's enterprise offerings in cloud computing and artificial intelligence are also growing.
Still, don't expect Pichai to change much with search, Google Maps, Google Apps or YouTube -- other than to continue their course of incrementally adding new improvements. The entire company hinges on the continued success of these services, so expect Pichai and his leaders to be very conservative in how they manage them. The biggest challenges are well known: dealing with government regulation around the world, responding to backlash from parents and health providers over screen time, and working within the laws of authoritarian regimes when it comes to censorship and handing over data on dissidents, who are often fighting for democracy and human rights.
What it means for Android and Google devices
Pichai rose to prominence as the leader of the team that created the Chrome web browser and Chrome OS devices such as the original Chromebook Pixel laptop. When his role expanded to include the leadership of the Android team in 2013, most of us expected Chrome OS and Android to merge into one. The fact that it never happened tells us something about Pichai's pragmatism.
Despite fears that Google could eventually sideline Android and put most of its focus on Chrome OS -- where it has more control and makes more money -- Google has not only kept pace in upgrading and simplifying the Android user experience, but it's also invested in Android hardware by creating its own Pixel phones. Google has never deeply invested in marketing Pixel devices, however, likely out of fear of cannibalizing sales from important Android partners like Samsung.
As Alphabet CEO, Pichai will now come under more and more pressure from Wall Street to diversify Google's revenue. That will resurface the questions of whether Google should aggressively market its own Pixel devices at the expense of its partners or if it should evolve its mobile strategy from Android to Chrome OS-powered phones. Either option would be disruptive to loyal Android users.
What it means for Nest and Google Assistant
Google unified many of its smart home products under the Nest brand in 2019, including its smart speakers and its Wi-Fi routers. Nest had originally operated independently within Google, but it's now been folded into Google's hardware division, essentially replacing the "Google Home" brand. Nest has basically become the Pixel brand of the smart home.
The way that Pichai has handled that -- after Nest bounced around inside Google and Alphabet for a few years -- again shows his pragmatism. In this case -- unlike with Android and Chrome OS -- it no longer made sense to have two brands and two teams working on similar stuff, so they've been combined.
Meanwhile, Google Assistant keeps its current branding because it's the voice version of traditional Google search, and it spreads its wings across platforms beyond the smart home -- especially phones. The power of the Google brand itself remains one of the company's greatest assets, and it will be important to watch the way Pichai -- and the new regime he's likely to put in place at Alphabet -- chooses to use it in the years ahead.