Major Branches of Artificial Intelligence (AI)

For those unaware, AI (artificial intelligence) is not a recently coined term; the idea has been around for decades.

John McCarthy coined the term “Artificial Intelligence” at the Dartmouth Conference in 1956, but the underlying idea was first conceptualised in 1943 by Warren McCulloch and Walter Pitts. 

AI started gaining real traction in the 21st century. It has since become a hot topic in almost every industry and has gone through several modifications and iterations. 

This article discusses the main AI branches and how they can be distinguished based on their stages or functionality. Let’s get started. 

What is Artificial Intelligence (AI)?

John McCarthy defined AI as “the science and engineering of making intelligent machines.”

AI or artificial intelligence is the part of computer science that deals with the ability of machines or systems to emulate or mimic human behaviour.

It primarily deals with cognitive capacity and looks to understand language and reasoning in order to undertake problem-solving similar to how humans would. 

Although AI looks to automate much of what humans can do, it is not trying to replace humans. Instead, it uses its ability to process information quickly to work in tandem with humans, amplifying their skills and contributions. 

This explains why it has progressively found usage across industries, and why more and more businesses are looking to harness it to improve their performance. 

Stages of Artificial Intelligence

Before delving into the multiple branches of AI, we need to talk about its stages of evolution. We can categorise AI into three stages based on its ability to mimic human characteristics, the underlying technology, its real-world applications, and its capacity for a theory of mind. 

Here are the three stages of AI – 

  • Stage 1 – Artificial Narrow Intelligence (ANI) – Also known as ‘Weak AI’, machines at this stage lack thinking abilities and can only perform a narrow set of pre-defined tasks. For example – Alexa and Siri.
  • Stage 2 – Artificial General Intelligence (AGI) – Also known as ‘Strong AI’, machines at this stage will have the ability to emulate human thinking and decision-making. We are yet to see products that fall into this category, but we may see some in the coming decades.  
  • Stage 3 – Artificial Super Intelligence (ASI) – Machines at this stage are believed to surpass human abilities. This is currently only hypothetical, but given how fast the field is progressing, we cannot write it off completely!

We have gained expertise in Stage 1 and are doing extensive research in Stage 2. 

Types of AI based on functionality

We can also divide AI into several types based on its functionality. Here are some primary AI types based on the functions they perform – 

  • Reactive Machines AI – These are the most basic AI systems, with the mere ability to react to the current situation. They cannot retain memories to inform decision-making. 
  • Limited Memory AI – These can learn from past incidents and use that history to build a good-fit model for how to react.
  • Theory of Mind AI – These would emulate human-like thinking abilities and hold meaningful conversations. They depend on human commands and use their learning abilities to interact. 
  • Self-aware AI – These would match humans in terms of consciousness and are considered the most advanced versions. They would have superior cognitive capabilities and could form their own thoughts and reactions.
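The practical difference between the first two types above can be sketched in a few lines of code. This is a toy illustration (the agents and actions are invented for this example, not taken from any real system): a reactive agent maps the current input straight to an action, while a limited-memory agent also consults a short window of past observations.

```python
class ReactiveAgent:
    """Stateless: the same input always yields the same action."""
    def act(self, observation):
        return "brake" if observation == "obstacle" else "drive"


class LimitedMemoryAgent:
    """Keeps a short window of past observations to inform decisions."""
    def __init__(self, window=3):
        self.history = []
        self.window = window

    def act(self, observation):
        # Remember only the last few observations.
        self.history = (self.history + [observation])[-self.window:]
        # Slow down pre-emptively if an obstacle was seen recently.
        if "obstacle" in self.history:
            return "brake" if observation == "obstacle" else "slow"
        return "drive"
```

Right after an obstacle clears, the reactive agent immediately returns to "drive", while the limited-memory agent cautiously returns "slow" because the obstacle is still in its recent history.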


Also read: How is AI used in data analytics (+ Tools and Examples)

Branches of Artificial Intelligence

We primarily distinguish the different branches of AI by their mode of application. Every branch has unique capabilities and finds usage in different scenarios. 

We can also distinguish the various branches of artificial intelligence by their complexity or mode of operation. 

Let us have a look at the most prominent AI branches – 

1. Neural Networks

Neural networks are an advanced area of Machine Learning (ML) that use high-dimensional data to solve more advanced problems. The methodology used to implement deep neural networks is called deep learning.

Neural networks are designed to emulate the human neurological system and are instrumental in solving complex, real-world problems. They do so by identifying relational links between the multiple variables involved in each scenario. 

For this, they have three primary parts – 

  • Units or Neurons – The basic processing units of the network; each neuron combines its inputs using weights and a bias, then applies an activation function.
  • Weights or Parameters – Numbers that scale each input to a neuron; these are the values the network adjusts during training.
  • Biases – A constant added to the weighted sum of a neuron’s inputs before the activation function is applied. 
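The three parts above can be tied together in a minimal sketch of a single artificial neuron. The weights, bias, and inputs below are illustrative values, not trained parameters:

```python
import math


def sigmoid(x):
    """A common activation function that squashes any number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))


def neuron(features, weights, bias):
    # Multiply each input feature by its weight, add the bias constant,
    # then pass the result through the activation function.
    weighted_sum = sum(f * w for f, w in zip(features, weights)) + bias
    return sigmoid(weighted_sum)


output = neuron([0.5, 1.0], weights=[0.4, -0.2], bias=0.1)
```

A full network simply stacks many such neurons in layers; training consists of nudging the weights and biases so the outputs match known examples.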

A neural network finds usage where the incoming data is complex and requires near human-like abilities to be decoded successfully. For example, Alexa and Siri use deep learning to understand and get acquainted with human behaviour. 

Uses of Neural Networks

As mentioned above, neural networks find multiple use cases across industries, but here are its most important applications – 

  • Computer vision – The ability of machines to extract details and insights from video and still content is called computer vision. From self-driving cars and facial recognition to CCTVs, it has been a widely availed neural network use case. 
  • Speech recognition – Neural networks can decode human speech, picking up pitch, tone, and accent and using them for further processing. Smart assistants like Siri and Alexa use speech recognition to perform a flurry of tasks.
  • Personalised recommendations – The ability of neural networks to store and analyse past events allows them to make accurate product or service recommendations to users. Many online services use social listening and track user cookies to make personalised recommendations to users. 
  • Handwriting analysis – Neural networks are also used for verifying an individual’s signature. They are trained, using image processing, to distinguish between original and forged signatures. This plays an integral role in forensics and other areas where automated verification is needed. 

2. Expert Systems 

Designed during the 1970s, expert systems emulate the decision-making abilities of human experts. They use if-then reasoning and insights from past experience to produce an appropriate response for a given situation. 

They find usage in several scenarios where robust decision-making is needed, such as medical facilities, loan analysis, fraud detection, IT management, and more. 
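The if-then reasoning described above can be sketched as a small rule base. The loan-screening rules and thresholds below are invented for illustration and are not real lending criteria:

```python
# Each rule pairs a condition with a verdict; the first matching rule wins,
# mirroring how a simple forward-chaining expert system fires its rules.
RULES = [
    (lambda a: a["credit_score"] < 600, "reject: low credit score"),
    (lambda a: a["debt_ratio"] > 0.5, "reject: debt ratio too high"),
    (lambda a: a["income"] >= 50000, "approve"),
]


def evaluate(applicant):
    for condition, verdict in RULES:
        if condition(applicant):
            return verdict
    # No rule matched: escalate instead of guessing.
    return "refer to human expert"
```

Real expert systems hold thousands of such rules, elicited from domain experts, plus an inference engine that chains them together; the structure, however, is the same.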

3. Robotics 

Robotics is an interdisciplinary branch of AI that incorporates a myriad of engineering and science branches, such as electrical, mechanical, computer science, and more. These AI techs are used to create robots that can assist and emulate human behaviour with varying levels of automation. 


Robotics AI finds usage across industries, and its use cases range from basic home cleaning to fully autonomous operation without human intervention. Robots are also helpful in industries that require assembly, as they can be combined with machine learning to attain higher levels of automation. 

4. Machine Learning 

Machine learning (ML) enables machines to learn from data and improve their behaviour automatically, without being explicitly programmed by humans. Its primary focus is to create programs that can access data and use it for proactive decision-making. 

Machine learning can be further subdivided into the following categories – 

  • Supervised machine learning – These algorithms use what they have learned from labelled examples in the past to predict future outcomes.
  • Unsupervised machine learning – These use information that is neither labelled nor classified to uncover hidden structures and draw inferences for more robust decision-making. 
  • Semi-supervised machine learning – These algorithms use a small amount of labelled data combined with a large quantity of unlabelled data for model building and decision-making.
  • Reinforcement machine learning – These use context and trial and error to determine the ideal behaviour for each given situation. 
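The supervised category above can be illustrated with one of the simplest possible learners: a 1-nearest-neighbour classifier, which labels a new point with the label of its closest labelled example. The tiny training set below is invented for illustration:

```python
import math

# Labelled examples: (feature vector, label). In a real task these would be
# measurements such as weight and ear length; here they are made up.
TRAIN = [
    ((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
    ((4.0, 4.2), "dog"), ((3.8, 4.0), "dog"),
]


def predict(point):
    """Return the label of the training example nearest to `point`."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    nearest = min(TRAIN, key=lambda example: dist(example[0], point))
    return nearest[1]
```

The "learning" here is trivial (the model just memorises the labelled data), but the workflow is the essence of supervised learning: labelled examples in, predictions for unseen inputs out.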

5. Fuzzy Logic 

Fuzzy logic finds usage when it is difficult to say definitively whether a condition is true or false. It handles uncertain information by measuring the degree of truth of a hypothesis on a scale between 0 and 1.

Industries and situations that demand decision-making under uncertainty deploy fuzzy logic. Examples include automatic gearboxes, medicine, sports, and more. 
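The degree-of-truth idea can be sketched with a membership function. In the hypothetical example below, "the room is hot" is not simply true or false; its truth rises linearly from 0 at 20°C to 1 at 35°C (the thresholds are illustrative, not from any standard):

```python
def hot_membership(temp_c):
    """Degree (0.0 to 1.0) to which a temperature counts as 'hot'."""
    if temp_c <= 20:
        return 0.0
    if temp_c >= 35:
        return 1.0
    # Linear ramp between the two thresholds.
    return (temp_c - 20) / 15.0
```

A fuzzy controller combines many such memberships with fuzzy rules ("if hot AND humid, then fan speed high") and then converts the blended result back into a crisp output value.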

6. Natural Language Processing (NLP)

A machine doesn’t understand human language by default, which can create a major gap between inputs and outputs. NLP, or natural language processing, is an AI branch that seeks to eliminate this gap by helping systems decode what humans are trying to say. 

Given the vital problem it solves, NLP finds usage across industries and for a plethora of purposes, including – 

  • Chatbots
  • Sentiment analysis
  • Document scanning
  • Comment moderation
  • Email and call classification
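Sentiment analysis, one of the uses listed above, can be sketched in its simplest lexicon-based form: count positive and negative words and report the overall polarity. The tiny word lists below are illustrative only; real systems use large lexicons or trained models:

```python
POSITIVE = {"good", "great", "excellent", "love", "happy"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "sad"}


def sentiment(text):
    """Classify text as positive, negative, or neutral by word counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

This naive approach misses negation ("not good") and sarcasm, which is exactly why modern NLP relies on models that consider word order and context rather than isolated words.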

Conclusion

Given the new developments in the world of informatics, we can’t lay down all the branches of AI in a single article/blog post. But it is imperative for humans to understand the need for perfecting the existing branches instead of just looking to develop newer ones. 

Merely shifting from one tech to another will not only be useless for AI’s development but can cause significant roadblocks in its problem-solving abilities. Instead, we should focus on fine-tuning our existing skills and ensuring we can harness AI and its branches to solve our most important problems. 

Reach out to us at Accord for more information on how to get started with the Artificial Intelligence learning journey through our training courses in Singapore. 
