
Facial Recognition

Movies/Shows: Silicon Valley
Air date: April 22, 2018
Previous episode: Tech Evangelist
Next episode: Artificial Emotional Intelligence

About Facial Recognition


Overshadowed by Jared in an on-camera interview, Richard's confidence wavers further when Laurie and Monica force him to work with Eklow, a new artificial-intelligence company; Gavin questions his future beyond Hooli.

Shop owners hope that tech can deter thieves

Nov 17, 2023 1:11 am

... "We've heard examples of businesses using facial recognition to prevent retail crimes," says Tina McKenzie, Policy Chair of the Federation of Small Businesses (FSB), "but to many independent shop owners, this may seem like a luxury item - especially with soaring inflation and high interest rates...

Rise in shoplifting: ‘Her son was crying and hungry so she stole food'

Nov 15, 2023 4:21 am

... Shoplifting will soon be treated like organised crime under a plan in which police have committed to attending more crime scenes and using facial recognition software to target offenders, after retailers complained of a failure to tackle a rise in shoplifting...

Beyoncé's Cardiff gig crowd was scanned for paedophiles

Nov 8, 2023 12:11 pm

...By Shelley Phelps, BBC News. Facial recognition was used on crowds attending a Beyoncé concert in Cardiff to scan for paedophiles and terrorists...

Rishi Sunak: AI firms cannot 'mark their own homework'

Nov 1, 2023 2:31 pm

... Speaking ahead of the event in London, US Vice President Kamala Harris said that world leaders "must address the full spectrum of AI risks to humanity" and listed examples of faulty algorithms in healthcare, the use of AI in making "deepfake" misinformation, and biased facial recognition...

Police to treat shoplifting like organised crime

Oct 23, 2023 11:11 am

... Under the plan, police have committed to attend more crime scenes and use Facial Recognition to target offenders...

Supernova festival: How massacre unfolded from verified video and social media

Oct 9, 2023 4:31 pm

... BBC Verify has pieced together the events of the weekend's festival bloodbath using video and social media posts that we have verified, and facial recognition technology...

AI facial recognition: Campaigners and MPs call for ban

Oct 5, 2023 7:31 pm

...By Imran Rahman-Jones & Liv McMahon, Technology reporters, BBC News. Police and private companies should "immediately stop" the use of facial recognition surveillance, says a group of politicians and privacy campaigners...

Police access to passport photos 'risks public trust'

Oct 4, 2023 7:21 am

... But civil liberties groups, who have already raised concerns about the existing use of facial recognition technology by the police, said using passport photos risks exacerbating them...

Google tackles the black box problem with Explainable AI

Sep 26, 2023 10:41 am

Prof Moore introduced Google Cloud's Explainable AI in London

There is a problem with artificial intelligence.

It can be amazing at churning through gigantic amounts of data to solve challenges that humans struggle with. But understanding how it makes its decisions is often very difficult, if not impossible.

That means when an AI model works it is not as easy as it should be to make further refinements, and when it exhibits odd behaviour it can be hard to fix.

But at an event in London this week, Google's cloud computing division pitched a new facility that it hopes will give it the edge on Microsoft and Amazon, which dominate the sector. Its name: Explainable AI.

To start with, it will give information about the performance and potential shortcomings of face- and object-detection models. But in time the firm intends to offer a wider set of insights to help make the "thinking" of AI algorithms less mysterious and therefore more trustworthy.

"Google is definitely the underdog behind Amazon Web Services and Microsoft Azure in terms of the cloud platform space, but for AI workloads I wouldn't say that's the case - particularly for retail clients," commented Philip Carter from the consultants IDC.

"There's a bit of an arms race around AI... and in some ways Google could be seen to be ahead of the other players."

The Explainable AI cards will outline the performance and limitations of the algorithms involved

Prof Andrew Moore leads Google Cloud's AI division.

He told the BBC the secret behind the breakthrough was "really cool fancy maths".

The transcript below has been edited for clarity and length:

Can you explain what led to Explainable AI?

One of the things which drives us crazy at Google is we often build really accurate machine learning models, but we have to understand why they're doing what they're doing. And in many of the large systems we built for our smartphones or for our search-ranking systems, or question-answering systems, we've internally worked hard to understand what's going on. Now we're releasing many of those tools for the external world to be able to explain the results of machine learning as well. The era of black-box machine learning is behind us.

How do you go about doing that - it's not as though you can peer into a neural net and see why an input became an output?

The main question is to do these things called counterfactuals, where the neural network asks itself, for example, 'Suppose I hadn't been able to look at the shirt colour of the person walking into the store, would that have changed my estimate of how quickly they were walking?' By doing many counterfactuals, it gradually builds up a picture of what it is and isn't paying attention to when it's making a prediction.
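The counterfactual probing Prof Moore describes can be sketched in a few lines. This is an illustrative toy, not Google's actual implementation: the function name, the toy `predict` model, and the feature names are all hypothetical, chosen to mirror the shirt-colour example from the interview.

```python
# Illustrative sketch of counterfactual feature attribution: replace
# one input feature at a time with a baseline value ("what if I
# hadn't seen it?") and measure how much the prediction changes.

def counterfactual_attribution(predict, x, baseline):
    """Score each feature by the prediction shift caused by
    swapping it for its baseline value."""
    base_pred = predict(x)
    scores = {}
    for name in x:
        x_cf = dict(x)                 # counterfactual input
        x_cf[name] = baseline[name]    # "hadn't been able to look at it"
        scores[name] = base_pred - predict(x_cf)
    return scores

# Toy model for the interview's walking-speed example:
# shirt colour contributes nothing; stride length dominates.
def predict(features):
    return 0.0 * features["shirt_colour"] + 1.5 * features["stride"]

x = {"shirt_colour": 1.0, "stride": 2.0}
baseline = {"shirt_colour": 0.0, "stride": 0.0}
print(counterfactual_attribution(predict, x, baseline))
# stride gets a large score; shirt_colour gets zero
```

Running many such perturbations builds up the "picture of what it is and isn't paying attention to" that the answer describes.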

Google hopes its model cards will make it easier for developers to debug and enhance their AI-based systems
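A model card is essentially structured metadata shipped alongside a model. As a hedged sketch of the idea (the class and field names below are illustrative, not Google's actual schema), it might record intended use, known limitations, and performance broken down by evaluation slice:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a model card's contents; field names are
# illustrative, not taken from Google's published format.
@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    limitations: list
    # performance broken down by evaluation slice, e.g. lighting
    performance: dict = field(default_factory=dict)

card = ModelCard(
    model_name="face-detector-v1",
    intended_use="Detect (not recognise) faces in still images",
    limitations=["Degrades in low light", "Not evaluated on occluded faces"],
    performance={"overall": 0.94, "low_light": 0.78},
)
print(card.performance["low_light"])
```

Surfacing per-slice numbers like `low_light` is what lets a developer spot where a detector underperforms before deploying it.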

This is really important, isn't it, from a confidence point of view? If we're going to trust not just our businesses, but our lives to artificial intelligence algorithms, it's no good if, when things go wrong, we can't work out why.

Yes. It's really important for societal reasons and fairness reasons and safety reasons. But I will say that no self-respecting AI practitioner would ever release a safety-critical machine learning system without having additional guardrails on it beyond just having Explainable AI.

To be clear, are you saying Google has completely solved the black box problem, or just that you're shining a bit of light in there?

With the new Explainable AI tools we're able to help data scientists do strong diagnoses of what's going on. But we have not got to the point where there's a full explanation of what's happening. For example, many of the questions about whether one thing is causing something or correlated with something - those are closer to philosophical questions than things that we can purely use technology for.

One AI service you aren't offering clients is facial recognition. You've limited yourselves instead to letting clients detect but not recognise faces, with an exception made for those of celebrities. Microsoft and Amazon, by contrast, allow users to build more general facial recognition capabilities into their tools. Why is your approach different?

Amazon's Rekognition tool offers both facial recognition and analysis features

In general, within Google, we understood how important it is that artificial intelligence is applied responsibly. And so, our chief executive Sundar Pichai commissioned a set of principles that we operate with. They include the fact that we should never be doing harm, and that we should be making sure that the decisions of the systems are unbiased, fair and accountable. As a result of this it does mean that we are very careful. And it does sometimes come across that we are reluctant to just release something and hope that it works, because we subject everything to a battery of tests to make sure they are working in a way that's desirable.

Switching tack. Before you took on this role you did AI work for the US Department of Defense. And you joined soon after Google pulled out of a tie-up with the Pentagon to label drone footage - do you think Google's decision to drop Project Maven was wrong?

That was before my time. So I'm not going to comment on that specific decision.

I will say that one of my roles is to serve on the United States Artificial Intelligence Congressional Commission on AI for National Security. And I, and many other folks throughout the industry, understand that we technology providers do have an obligation to help protect countries and societies, as well as producing consumer products.

A couple of weeks ago, our chief legal counsel, Kent Walker, set out how we do want to help out in aspects of national security which will make people safer.

But Google Cloud is ruling out work on weapon systems?

Google's AI principles say that they're not going to be working on offensive weapons systems.

So do you think that Google should be pursuing military or other national security contracts in the future?

I don't want to talk about any specific contracts. But for example, Google is actively helping out with the question of "deepfake" detection, which is this new fear that artificially constructed videos or images might become so realistic that they actually cause societal problems. And so we're partnering with a major government agency in the United States to help deal with that potential.

The decision to abandon Project Maven followed internal opposition to the effort from many Google employees. Do you agree with the view of others, including Microsoft president Brad Smith, that while it's worth listening to workers' concerns you also sometimes need to push back against employee activism?

Thousands of Google employees had signed an open letter to complain about its involvement in Project Maven

One of the things I love about Google, and why I chose to return to Google to work, is that it is full of lots of creative voices. And pretty much everything we do, including the design of the shape of buttons on a front-end system, we end up having massive internal arguments about. Eventually you do have to make a decision one way or the other. The idea of doing top-down management is completely out of Google's culture. But knowing that people are going to disagree and having leadership commit is also something that we are very clear that we do.




Source of news: bbc.com
