Introduction to deep learning

    Machine learning is a collection of algorithms and tools that help machines understand patterns within data and use that underlying structure to reason about a given task. There are many ways that machines aim to understand these underlying patterns. But how does machine learning relate to deep learning? In this article, we provide an overview of how deep learning fits into this realm, and we also discuss some of its applications and challenges.

    There is a growing misconception that deep learning is a technology competing with machine learning. In this article, we dispel some of these myths, explain how deep learning relates to machine learning, and describe the advantages of using deep learning algorithms in certain applications.

    To put things in perspective, deep learning is a subdomain of machine learning. With accelerated computational power and large data sets, deep learning algorithms are able to self-learn hidden patterns within data to make predictions.

    In essence, you can think of deep learning as the branch of machine learning in which models are trained on large amounts of data and many computational units work in tandem to make predictions.
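
    To make the stacked-computational-units idea concrete, here is a minimal sketch of a two-layer feed-forward network in plain NumPy. The layer sizes, weights, and input are made up for illustration and do not come from any particular framework:

        import numpy as np

        rng = np.random.default_rng(0)

        def relu(x):
            return np.maximum(0, x)

        # Illustrative shapes: 4 input features, 8 hidden units, 3 classes.
        W1, b1 = rng.normal(size=(4, 8)) * 0.1, np.zeros(8)
        W2, b2 = rng.normal(size=(8, 3)) * 0.1, np.zeros(3)

        def forward(x):
            # Each layer is a bank of simple computational units, each
            # applying a weighted sum followed by a nonlinearity.
            h = relu(x @ W1 + b1)
            logits = h @ W2 + b2
            # Softmax turns the output units' scores into probabilities.
            exp = np.exp(logits - logits.max())
            return exp / exp.sum()

        x = rng.normal(size=4)   # a single made-up input example
        print(forward(x))        # predicted class probabilities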

    Applications

    Deep learning has a plethora of applications in almost every field, such as health care, finance, and image recognition. In this section, let’s go over a few of them.

    • Health care: With easier access to accelerated GPUs and the availability of huge amounts of data, health care use cases have been a perfect fit for deep learning. Using image recognition, cancer detection from MRI images and X-rays has surpassed human levels of accuracy. Drug discovery, clinical trial matching, and genomics have been other popular health care applications.

    • Autonomous vehicles: Though self-driving is a risky field to automate, it has recently taken a turn towards becoming a reality. From recognizing a stop sign to seeing a pedestrian on the road, deep learning-based models are trained and tested in simulated environments to monitor progress.

    • e-commerce: Product recommendation has been one of the most popular and profitable applications of deep learning. With more personalized and accurate recommendations, customers can easily shop for the items they are looking for and view all of the options they can choose from. This also accelerates sales and thus benefits sellers.

    • Personal assistants: Thanks to advancements in deep learning, having a personal assistant is as simple as buying a smart device powered by Alexa or Google Assistant. These assistants use deep learning in various aspects such as personalized voice and accent recognition, personalized recommendations, and text generation.

    Clearly, these are only a small sample of the vast range of problems to which deep learning can be applied. Stock market prediction and weather forecasting are equally popular fields in which deep learning has proven helpful.

    Challenges in deep learning

    Though deep learning methods have gained immense popularity in the last 10 years or so, the idea has been around since the late 1950s, when Frank Rosenblatt first simulated the perceptron on an IBM® 704 machine. It was a simple two-layer system that could learn to detect shapes. Advancements in this field in recent years are primarily due to increased computing power and high-performance graphical processing units (GPUs), coupled with a large increase in the wealth of data available for these models to learn from, as well as sustained interest and funding from the community for continued research. Though deep learning has taken off in the last few years, it comes with its own set of challenges that the community is working hard to resolve.
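
    To show how simple the original idea was, here is a minimal perceptron in Python trained with the classic perceptron learning rule. The tiny data set (logical AND), learning rate, and epoch count are assumptions for the sketch:

        import numpy as np

        # Tiny linearly separable data set: inputs and 0/1 labels (logical AND).
        X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
        y = np.array([0, 0, 0, 1])

        w = np.zeros(2)   # one weight per input
        b = 0.0           # bias
        lr = 0.1          # learning rate (assumed)

        for epoch in range(20):
            for xi, target in zip(X, y):
                pred = 1 if xi @ w + b > 0 else 0   # threshold unit
                # Perceptron rule: nudge weights by the prediction error.
                w += lr * (target - pred) * xi
                b += lr * (target - pred)

        print([1 if xi @ w + b > 0 else 0 for xi in X])   # [0, 0, 0, 1]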

    Need for data

    The deep learning methods prevalent today are very data hungry, and many complex problems, such as language translation, don’t have large, high-quality data sets available. Deep learning methods for neural machine translation to and from low-resource languages often perform poorly, and techniques such as domain adaptation (applying what is learned building high-resource systems to low-resource scenarios) have shown promise in recent years. For problems such as pose estimation, it can be arduous to collect a high volume of real data, and the synthetic data a model ends up training on often differs considerably from the “in-the-wild” settings in which the model ultimately needs to perform.
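
    As a rough illustration of this transfer idea, the sketch below fine-tunes only the final layer of a pretrained image model on a small new task. The choice of ResNet-18 from torchvision, the freezing strategy, and the 10-class head are all assumptions for the example, not a prescription:

        import torch
        import torch.nn as nn
        from torchvision import models

        # Start from a model pretrained on a large, high-resource data set.
        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

        # Freeze the pretrained feature extractor to keep what it learned.
        for param in model.parameters():
            param.requires_grad = False

        # Replace the head for the small target task (10 classes, assumed).
        model.fc = nn.Linear(model.fc.in_features, 10)

        # Only the new head's parameters are updated during fine-tuning.
        optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        # One illustrative training step on a fake batch of 8 images.
        images = torch.randn(8, 3, 224, 224)
        labels = torch.randint(0, 10, (8,))
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()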

    Explainability and fairness

    Even though deep learning algorithms have proven to beat human-level accuracy on some tasks, there is no clear way to backtrack through a network and provide the reasoning behind each prediction it makes. This makes deep learning difficult to use in applications such as finance, where there are mandates to provide the reasoning behind every loan that is approved or rejected.
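
    Partial remedies do exist; one simple, widely used probe is gradient-based saliency, which asks how sensitive a prediction is to each input feature. The tiny model and input below are made up for the sketch, and the result is a heuristic importance ranking, not a full explanation:

        import torch
        import torch.nn as nn

        # A small made-up classifier over 16 input features.
        model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
        model.eval()

        x = torch.randn(1, 16, requires_grad=True)   # one fake input
        score = model(x)[0, 1]   # score assigned to class 1
        score.backward()         # gradient of that score w.r.t. the input

        # Large-magnitude gradients mark the features the prediction is
        # most sensitive to, giving a crude importance ranking.
        saliency = x.grad.abs().squeeze()
        print(saliency.argsort(descending=True)[:5])   # top five features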

    Another dimension that tends to be an issue is the underlying bias in the data itself, which can lead to poor model performance on crucial subsets of the data. Learning agents that use a reward-based mechanism sometimes stop behaving ethically because, to minimize system error, all they need to do is maximize the reward they accrue. A widely cited example involved a game-playing agent that simply stopped progressing through the game and ended up in an infinite loop of collecting reward points. While that might be acceptable in a game scenario, wrong or unethical decisions can have a profound negative impact in the real world, so a strong need exists for models that learn in a balanced fashion.

    IBM maintains an open source toolkit, AI Fairness 360 (AIF360), to detect, investigate, and mitigate bias in machine learning models. As deep learning researchers, it’s important for us to keep these challenges in mind while designing and conducting experiments.
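
    To make the idea of a bias check concrete, here is a hand-rolled version of one simple fairness metric, disparate impact (the ratio of favorable-outcome rates between unprivileged and privileged groups), of the kind AIF360 computes. The data is fabricated for the sketch:

        import numpy as np

        # Fabricated predictions (1 = loan approved) and a protected
        # attribute (1 = privileged group, 0 = unprivileged group).
        preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
        group = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

        rate_priv = preds[group == 1].mean()     # favorable rate, privileged
        rate_unpriv = preds[group == 0].mean()   # favorable rate, unprivileged

        disparate_impact = rate_unpriv / rate_priv
        # ~0.67 here; the common "four-fifths rule" flags values below 0.8.
        print(disparate_impact)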

     

    For more details, see:

    https://developer.ibm.com/technologies/artificial-intelligence/articles/an-introduction-to-deep-learning/