Monday, December 11, 2023

Neural networks explained by a computer scientist

Editor’s note: One of the central technologies of artificial intelligence is neural networks. In this interview, Tam Nguyen, a professor of computer science at the University of Dayton, explains how neural networks, programs in which a series of algorithms tries to simulate the human brain, work.

What are some examples of neural networks that are familiar to most people?

There are many applications of neural networks. One common example is your smartphone camera’s ability to recognize faces.

Driverless cars are equipped with multiple cameras which try to recognize other vehicles, traffic signs and pedestrians by using neural networks, and turn or adjust their speed accordingly.

Neural networks are also behind the text suggestions you see while writing texts or emails, and even in the translation tools available online.

Does the network need to have prior knowledge of something to be able to classify or recognize it?

Yes, that’s why there is a need to use big data in training neural networks. They work because they are trained on vast amounts of data to then recognize, classify and predict things.


In the driverless car example, the network would need to look at millions of images and videos of all the things on the street and be told what each of them is. When you click on images of crosswalks to prove that you’re not a robot while browsing the internet, your clicks can also be used to help train a neural network. Only after seeing millions of crosswalks, from all different angles and lighting conditions, would a self-driving car be able to recognize them when it’s driving around in real life.
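As an illustration of how labeled examples drive training, here is a minimal sketch, assuming made-up "images" reduced to two brightness numbers and a simple perceptron-style update rule; a real system would learn from millions of photos, not four toy examples:

```python
# Tiny labeled dataset: each "image" is two brightness values,
# with label 1 for crosswalk and 0 for not-a-crosswalk.
# (All names and numbers here are illustrative assumptions.)
examples = [
    ((0.9, 0.8), 1),   # bright striped region -> crosswalk
    ((0.8, 0.9), 1),
    ((0.1, 0.2), 0),   # plain dark asphalt -> not a crosswalk
    ((0.2, 0.1), 0),
]

# Perceptron-style weights, nudged whenever a prediction is wrong.
w = [0.0, 0.0]
b = 0.0
for _ in range(10):                      # several passes over the data
    for (x1, x2), label in examples:
        pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
        err = label - pred               # 0 when correct
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

# Classify a new bright patch the network has never seen.
print(1 if (w[0] * 0.85 + w[1] * 0.85 + b) > 0 else 0)  # prints 1
```

The point is the loop, not the numbers: the network only becomes able to recognize new examples because it was corrected on many labeled ones first.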

More complicated neural networks are actually able to teach themselves. In one demonstration video, the network is given the task of going from point A to point B, and you can see it trying all sorts of strategies to get the model to the end of the course until it finds one that does the best job.

Some neural networks can work together to create something new. In one online demo, two networks create virtual faces that don’t belong to real people each time you refresh the screen. One network makes an attempt at creating a face, and the other tries to judge whether it is real or fake. They go back and forth until the second one cannot tell that the face created by the first is fake.
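That back-and-forth can be caricatured in a few lines. This is a deliberately simplified sketch, assuming the "face" is just a single number and real faces cluster around the value 10.0; real generative adversarial networks pit two full neural networks against each other:

```python
# Toy stand-in for the adversarial loop: one "network" produces a
# number, the other judges how far it is from real data, and the
# producer keeps adjusting until the judge can't tell the difference.
# (REAL_MEAN and the update rule are illustrative assumptions.)
REAL_MEAN = 10.0          # where "real" faces live in this toy world
LEARNING_RATE = 0.1

generator_output = 0.0    # network 1's first, very fake attempt
for step in range(200):
    # Network 2's feedback: how far off is the attempt?
    error = REAL_MEAN - generator_output
    if abs(error) < 1e-3:
        break             # the judge can no longer tell it's fake
    # Network 1 adjusts its attempt using that feedback.
    generator_output += LEARNING_RATE * error

print(round(generator_output, 2))  # 10.0
```

Each round of feedback shrinks the gap, which mirrors the article's description: the loop stops only when the second network can no longer distinguish fake from real.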

Humans take advantage of big data too. A person perceives around 30 frames or images per second, which works out to 1,800 images per minute and over 600 million images per year, assuming roughly 16 waking hours a day. That is why we should give neural networks a similarly large amount of data for training.

How does a basic neural network work?

A neural network is a network of artificial neurons programmed in software. It tries to simulate the human brain, so it has many layers of “neurons,” just like the neurons in our brain. The first layer of neurons receives inputs like images, video, sound or text. This input data passes through all the layers, as the output of one layer is fed into the next.
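Here is a minimal sketch of that layer-by-layer flow, assuming random weights in place of trained ones and a sigmoid activation; the layer sizes are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Squashes each neuron's raw sum into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

# Random weights standing in for what training would normally learn:
# 4 input features -> 3 hidden neurons -> 2 output neurons.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 2))

def forward(x):
    h = sigmoid(x @ W1)    # first layer's output...
    return sigmoid(h @ W2) # ...is fed into the next layer

out = forward(np.array([0.2, 0.5, 0.1, 0.9]))
print(out.shape)  # (2,)
```

Training would adjust `W1` and `W2` so that the final layer's two numbers become meaningful, for instance, the probabilities of "dog" and "cat."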

Image by Colin Behrens from Pixabay

Let’s take the example of a neural network that is trained to recognize dogs and cats. The first layer of neurons will break up an input image into areas of light and dark. That data will be fed into the next layer, which recognizes edges. The next layer would then try to recognize the shapes formed by the combination of edges. The data would go through several layers in a similar fashion until the network can finally recognize whether the image you showed it is a dog or a cat, according to the data it’s been trained on.
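The "edges" step can be illustrated with a hand-made filter. This sketch assumes a toy 5×5 image that is dark on the left and bright on the right; real networks learn their own filters from data rather than using a fixed one:

```python
import numpy as np

def convolve2d(img, kernel):
    # Slide the filter over the image and sum the element-wise
    # products at each position (a plain 2D convolution, no padding).
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image: dark left half (0), bright right half (1).
img = np.zeros((5, 5))
img[:, 3:] = 1.0

# Sobel-like filter that responds strongly to vertical edges.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])

edges = convolve2d(img, kernel)
# The strongest responses line up with the dark-to-bright boundary.
print(edges)
```

Uniform regions produce zero response, while the dark-to-bright boundary lights up, which is exactly the light/dark-to-edges progression described above.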


These networks can be incredibly complex, consisting of millions of parameters used to classify and recognize the inputs they receive.

Why are we seeing so many applications of neural networks now?

Neural networks were actually invented a long time ago: in 1943, Warren McCulloch and Walter Pitts created a computational model for neural networks based on algorithms. The idea then went through a long hibernation, because the immense computational resources needed to build neural networks did not exist yet.

Recently, the idea has come back in a big way, thanks to advanced computational resources like graphics processing units (GPUs). These chips were originally designed for processing graphics in video games, but it turns out they are also excellent for crunching the data required to run neural networks. That is why we now see the proliferation of neural networks.

• Tam Nguyen is an Assistant Professor in the Department of Computer Science at the University of Dayton (UD). This article originally appeared on The Conversation.

Tam Nguyen
I am Tam Nguyen, an Assistant Professor in the Department of Computer Science at the University of Dayton (UD), where I lead the Vision and Mixed Reality (VMR) Lab. I received my PhD from the National University of Singapore (NUS) in 2013. I have 10+ years of working experience in both the research and industrial sectors. My research topics include computer vision, machine learning, multimedia content analysis, and mixed reality. I have authored and co-authored 70+ research papers with 1,400+ citations (according to Google Scholar). I am the recipient of numerous awards, including Young Vietnamese of the Year 2005, second prize in the ICPR 2012 contest on action recognition, the best technical demonstration award at ACM Multimedia 2012, the best student paper award at NUS-GSS 2013, the Singapore Polytechnic R&D commendation award 2015, and third prize in the CVPR 2017 and CVPR 2019 video object segmentation competitions.
