The story of AI, told by the people who invented it


Welcome to I Was There When, a new oral history project from the In Machines We Trust podcast. It features stories of how breakthroughs in artificial intelligence and computing happened, as told by the people who witnessed them. In this first episode, we meet Joseph Atick, who helped create the first commercially viable face recognition system.

Credits:

This episode was produced by Jennifer Strong, Anthony Green and Emma Cillekens, with help from Lindsay Muscato. It's edited by Michael Reilly and Mat Honan. It's mixed by Garret Lang, with sound design and music by Jacob Gorski.

Full transcript:

[TR ID]

Jennifer: I'm Jennifer Strong, host of In Machines We Trust.

I want to tell you about something we've been working on behind the scenes here.

It's called I Was There When.

It's an oral history project featuring the stories of how breakthroughs in artificial intelligence and computing happened… as told by the people who witnessed them.

Joseph Atick: And as I walked into the room, it spotted my face, extracted it from the background, and it said, "I see Joseph"… and that was the moment where the hair on the back of my neck… I felt like something had happened. We were witnesses.

Jennifer: We start with a man who helped create the first commercially viable face recognition system… back in the '90s…

[IMWT ID]

I'm Joseph Atick. Today, I'm the executive chairman of ID4Africa, a humanitarian organization that focuses on giving people in Africa a digital identity so they can access services and exercise their rights. But I have not always been in the humanitarian field. After I received my doctorate in mathematics, my collaborators and I made some fundamental breakthroughs that led to the first commercial face recognition. That's why people refer to me as a founding father of face recognition and the biometrics industry. The algorithm for how the human brain would recognize familiar faces became clear while we were doing research, mathematical research, while I was at the Institute for Advanced Study in Princeton. But it was far from having an idea of how you would implement such a thing.

It was a long period of months of programming and failure and programming and failure. And one night, early in the morning, we had just finalized a version of the algorithm. We submitted the source code for compilation in order to get a run code. And we stepped out, I stepped out to go to the washroom. And when I came back to the room, the source code had been compiled by the machine and returned. And usually after you compile it, it runs automatically, and as I walked into the room, it spotted a human moving in the room, and it spotted my face, extracted it from the background, and it said, "I see Joseph," and that was the moment when the hair on the back of my neck… I felt like something had happened. We were witnesses. And I started calling on the other people who were still in the lab, and each one of them would come into the room.

And it would say, "I see Norman. I see Paul. I see Joseph." And we would sort of take turns running around the room, just to see how many it could spot in the room. It was, it was a moment of truth where I would say several years of work finally led to a breakthrough, even though theoretically, no additional breakthrough was required. Just the fact that we figured out how to implement it and finally saw that capability in action was very, very rewarding and satisfying. We had developed a team that was more of a development team than a research team, focused on putting all of those capabilities into a PC platform. And that was the birth, really the birth, of commercial face recognition, I would put it, in 1994.

My concern started very quickly. I saw a future in which there was no place to hide, with the proliferation of cameras everywhere, the commoditization of computers, and processing power getting better and better. So in 1998, I lobbied the industry and said we needed to put together principles for responsible use. And I felt good for a while, because I felt we had gotten it right. I felt we had put in place a responsible-use code to be followed by whoever was implementing the technology. However, that code did not live up to the test of time. And the reason is that we did not anticipate the emergence of social media. Basically, at the time we established the code in 1998, we said the most important element in a face recognition system was the tagged database of known people. We said, if I'm not in the database, the system will be blind.

And it was hard to build the database. At most we could build thousands, 10,000, 15,000, 20,000, because each image had to be scanned and entered by hand. The world we live in today, we are now in a regime where we have let the beast out of the bag by feeding it billions of faces and helping it by tagging them ourselves. We are now in a world where any hope of controlling face recognition, and requiring everybody to be accountable for its use, is difficult. And at the same time, there is no shortage of known faces on the internet, because you can just scrape them, as has happened recently with some companies. So I began to panic in 2011, and I wrote an op-ed saying it was time to press the panic button, because the world was heading in a direction where face recognition would become ubiquitous and faces would be available everywhere in databases.

And at the time people said I was an alarmist, but today they realize that it's exactly what's happening. So where do we go from here? I've been lobbying for legislation. I've been lobbying for legal frameworks that make it a liability for you to use somebody's face without their consent. And so it's no longer a technological issue. We cannot contain this powerful technology through technology alone. There has to be some sort of legal framework. We cannot allow the technology to get too far ahead of us. Ahead of our values, ahead of what we consider acceptable.

The issue of consent continues to be one of the most difficult and challenging matters when it comes to technology. Just giving somebody notice does not mean it's enough. To me, consent has to be informed. They have to understand the consequences of what it means. And not just to say, well, we put up a sign and that was enough. We told people, and if they didn't want to, they could have gone anywhere else.

I also find that it is so easy to get seduced by flashy technological features that might give us a short-term advantage in our lives. And then down the line, we recognize that we've given up something that was too precious. By that point, we have desensitized the population, and we get to a point where we cannot pull back. That's what worries me. I'm worried about the fact that face recognition, through the work of Facebook and Apple and others… I'm not saying all of it is illegitimate. A lot of it is legitimate.

We've arrived at a point where the general public may have become blasé, may have become desensitized, because they see it everywhere. And maybe in 20 years, you'll step out of your house and you will no longer have the expectation that you won't be recognized by the dozens of people you cross along the way. I think at that point the public will be very alarmed, because the media will start reporting on cases where people were stalked. People were targeted, people were even selected based on their net worth in the street and were kidnapped. I think that's a lot of responsibility on our hands.

And so I think the issue of consent will continue to haunt the industry. And until that question gets resolved, and maybe it won't be resolved, I think we need to put limits on what can be done with this technology.

My career has also taught me that being too far ahead of your time is not a good thing, because face recognition as we know it today was actually invented in 1994. But most people think it was invented by Facebook and the machine learning algorithms that are now proliferating around the world. I basically, at some point, had to step down as CEO because I was curtailing the use of technology that my company was going to promote, out of fear of negative consequences to humanity. So I feel scientists need to have the courage to project into the future and see the consequences of their work. I'm not saying they should stop making breakthroughs. No, you should go full force, make more breakthroughs, but we should also be honest with ourselves and basically alert the world and the policymakers that this breakthrough has pluses and minuses. And therefore, in using this technology, we need some sort of guidelines and frameworks to make sure it's channeled toward positive applications and not negative ones.

Jennifer: I Was There When… is an oral history project featuring the stories of people who have witnessed or created breakthroughs in artificial intelligence and computing.

Do you have a story to tell? Do you know anyone who does? Email us at podcasts@technologyreview.com.

[MIDROLL]

[CREDITS]

Jennifer: This episode was taped in New York City in December of 2020 and produced by me with help from Anthony Green and Emma Cillekens. We're edited by Michael Reilly and Mat Honan. Our mix engineer is Garret Lang… with sound design and music by Jacob Gorski.

Thanks for listening, I'm Jennifer Strong.

[TR ID]
