
What Is Artificial Intelligence

What is artificial intelligence (AI)? Despite recent headlines, AI is not a new technology.

In fact, the roots of AI reach back to the World War II era, when early computing work, including Alan Turing's codebreaking, laid the groundwork for the field.

But the things we can do with this technology today leave me, and will likely leave you, totally «blown away». And the best is yet to come.

If you are interested in creating art with artificial intelligence, see AI Drawings with DALL-E 2 – Draw What You Want with this Artificial Intelligence.

What is artificial intelligence?

Artificial intelligence relies on computers and devices to mimic the problem-solving and decision-making capabilities of the human mind.

There are many definitions of artificial intelligence (AI) that have emerged over the past few decades.

John McCarthy offers the following definition in a 2004 paper: «It is the science and engineering of making intelligent machines, especially intelligent computer programs.

It is related to the similar task of using computers to understand human intelligence, but AI need not limit itself to methods that are biologically observable.»

In 1950, Alan Turing proposed a test to determine whether a machine is capable of thinking.

The test became known as the «Turing test»: a human interrogator asks questions, and a computer and a human each attempt to answer.

If the interrogator cannot reliably tell which respondent is the human and which is the computer, the machine is judged capable of intelligent behavior.
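As a toy illustration only, the shape of this protocol can be sketched in a few lines of Python; the respondents and the coin-flipping interrogator below are invented placeholders, not real AI:

```python
import random

def imitation_game(interrogate, human, machine, questions):
    """Toy Turing test: the interrogator sees two unlabeled answer
    transcripts and must guess which respondent is the machine."""
    slots = [("A", human), ("B", machine)]
    random.shuffle(slots)                        # hide who is behind A and B
    transcripts = {name: [respond(q) for q in questions]
                   for name, respond in slots}
    guess = interrogate(transcripts)             # interrogator names a slot
    truth = next(name for name, respond in slots if respond is machine)
    return guess == truth                        # True: the machine was caught

# Placeholder respondents and an interrogator that guesses at random.
human = lambda q: f"Here is my honest answer to {q!r}."
machine = lambda q: "Interesting question."
interrogator = lambda transcripts: random.choice(sorted(transcripts))

caught = imitation_game(interrogator, human, machine, ["Can machines think?"])
```

If the interrogator can do no better than chance, the machine passes the test.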

Definitions of AI are commonly grouped into four approaches:

– Systems that act like humans

– Systems that think like humans

– Systems that think rationally

– Systems that act rationally

Alan Turing’s definition would have fallen into the category of «systems that act like humans».

In its simplest form, artificial intelligence can be defined as the combination of computation and collected data to enable problem solving.

Expert systems, an early success within the AI world, attempted to duplicate the decision-making process of a human being.

At first, extracting and documenting human knowledge was a time-consuming process.
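To make this concrete, here is a minimal sketch of the rule-based approach expert systems used; the medical rules below are invented for illustration, not real expertise:

```python
# Invented rule base: each rule maps a set of observed facts to a conclusion,
# mimicking knowledge hand-extracted from a human expert.
RULES = [
    ({"fever", "cough"}, "flu"),
    ({"sneezing", "itchy_eyes"}, "allergy"),
    ({"fever", "stiff_neck"}, "urgent_referral"),
]

def diagnose(facts):
    """Forward chaining in miniature: fire every rule whose
    conditions are all present among the observed facts."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= facts]

print(diagnose({"fever", "cough", "tiredness"}))  # -> ['flu']
```

The hard part, as noted above, was not the code but eliciting rules like these from human experts in the first place.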

Today’s artificial intelligence encompasses the fields of machine learning and deep learning, which are often mentioned together with AI.

This type of AI is based on algorithms that typically make predictions or classifications based on the input data provided.
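As a minimal example of classification from input data, a one-nearest-neighbour classifier fits in a few lines; the points and labels below are made up:

```python
import math

def nearest_neighbor(train, query):
    """Predict the label of the closest training example
    (1-nearest neighbour): classification in its simplest form."""
    point, label = min(train, key=lambda pair: math.dist(pair[0], query))
    return label

# Invented training data: (feature vector, label) pairs.
train = [((1.0, 1.0), "cat"), ((8.0, 9.0), "dog"), ((1.5, 0.5), "cat")]
print(nearest_neighbor(train, (2.0, 1.0)))  # -> cat
```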

Modern machine learning has greatly reduced that effort, since systems can learn decision rules from data rather than having them hand-coded.

Artificial intelligence often goes unnoticed in our day-to-day lives. It can be found in programs such as Google Search, voice recognition and Amazon recommendations.

As AI development continues to evolve, it is bound to generate excitement.

This happens with any new technology, as noted in Gartner’s hype cycle. Autonomous cars and personal assistants follow a typical progression of innovation:

Overwhelming enthusiasm followed by disillusionment before reaching relevance and market dominance. As Lex Fridman (01:08:15) points out in his 2019 MIT lecture, we are still at the peak of inflated expectations, approaching the trough of disillusionment.

Advantages and Disadvantages of AI – Artificial Intelligence

Artificial intelligence is a powerful new type of software capable of doing many things. It can learn new skills, be creative and even think for you.

However, like all powerful things, there are also some downsides.

AI is a broad concept that encompasses several different technologies, including machine learning. If you are interested in learning more about the different advantages and disadvantages of AI, read on.

Advantages of Artificial Intelligence

An artificial intelligence is a system that mimics the processes of human intelligence, such as learning and problem solving.

AIs have been around for decades, but recent advances in machine learning have made AI systems more sophisticated and accessible. The most notable advantages are as follows.

1. Artificial intelligence introduces new techniques to solve problems.

2. Artificial Intelligence has the potential to redefine how we interact with technology and could lead to more powerful and useful computers.

3. Machines can process large volumes of information faster and more consistently than humans.

4. It is very helpful for converting raw information into usable knowledge.

5. It improves work efficiency and reduces the time needed to complete a task compared with a human doing it.

Disadvantages of Artificial Intelligence

As every bright side has a darker side, artificial intelligence also has some disadvantages. Let’s take a look at some of them.

1. The implementation cost of AI is very high, and development difficulties make the investment hard to justify for many businesses.

2. Robots are one implementation of artificial intelligence, and they can replace jobs and lead to unemployment.

3. In the wrong hands, machines can be put to destructive use, with hazardous results for human beings.

Weak AI vs. strong AI: types of artificial intelligence

Weak AI, also called narrow AI or Artificial Narrow Intelligence (ANI), is AI trained to perform specific tasks.

It powers most of the AI around us today and is anything but weak. Weak AI enables some powerful applications such as Apple’s Siri, Amazon’s Alexa, IBM’s Watson and autonomous vehicles.

Strong AI is a combination of artificial general intelligence (AGI) and artificial superintelligence (ASI). Artificial general intelligence, or AGI, is a theoretical type of AI that would have human-level intelligence.

It would have a self-aware consciousness that is capable of problem solving, learning, and planning for the future.

Artificial superintelligence, or ASI, also known as «superintelligence,» would surpass the intelligence and capability of humans.

The truth is that no real examples of ASI exist today, because strong AI remains purely theoretical. While we wait for it to develop, the best examples we have come from science fiction books and movies, such as HAL 9000 in «2001: A Space Odyssey».

Machine learning versus deep learning

Deep learning is a subset of machine learning and has the potential for extremely accurate predictive modeling.

But you may want to consider your project requirements before deciding to work with deep learning.

It is worth understanding the difference between deep learning and machine learning.

Deep learning is a subfield of machine learning, which itself falls under the umbrella of artificial intelligence.

Deep learning is becoming more popular because it allows the use of larger data sets and eliminates some of the human intervention required.

It can even «eat» unstructured, raw data such as text or images, and can learn from the unlabeled data sets that deep learning algorithms thrive on.

Classical machine learning, on the other hand, requires both labeled data and structured data to work properly.

This requirement can make classical machine learning less attractive for companies or individuals who need to work with complicated, unstructured data sets.

Deep learning is a subset of machine learning that uses artificial neural networks.

The «deep» in a deep learning algorithm refers to an artificial neural network with more than three layers, counting the input and output layers.
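Such a stack, an input layer, two hidden layers and an output layer, can be sketched directly with NumPy; the layer sizes and random weights below are arbitrary and the network is untrained:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Forward pass through a stack of fully connected layers.
    With two hidden layers plus input and output, this network is
    'deep' in the sense used above (more than three layers)."""
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        if i < len(layers) - 1:      # hidden layers use ReLU
            x = relu(x)
    return x

# Input (4 features) -> hidden (8) -> hidden (8) -> output (3 classes)
sizes = [4, 8, 8, 3]
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

scores = forward(rng.standard_normal((2, 4)), layers)
print(scores.shape)  # (2, 3): one score per class for each of 2 inputs
```

Training would then adjust the weights from labeled or unlabeled data; only the untrained forward pass is shown here.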

AI has improved greatly in recent years. The rise of deep learning has been one of the most significant advances because it reduces the amount of time and effort needed to build AI systems.

A big reason for this is that deep learning technologies are enabled by big data and cloud architectures so you can access large amounts of data and processing power to train AI solutions.

AI Applications

There are many real-world applications of AI systems today. Below are some of the most common applications:

1. Fraud detection

Banks can use machine learning to identify suspicious transactions. Supervised learning is used to train a model with data on known fraudulent transactions.

Anomaly detection helps identify transactions that appear atypical and may warrant further investigation.
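As a crude stand-in for that anomaly-detection step, a z-score rule flags any transaction far from the account's typical amount; the transaction history below is fabricated:

```python
import statistics

def flag_anomalies(amounts, threshold=2.0):
    """Flag amounts more than `threshold` standard deviations from
    the mean: a toy version of the anomaly detection described above."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    return [a for a in amounts if stdev and abs(a - mean) / stdev > threshold]

history = [12.0, 15.5, 9.99, 14.25, 11.0, 13.75, 10.5, 950.0]
print(flag_anomalies(history))  # -> [950.0]
```

Production systems score many richer features than a single amount, but the principle of measuring how atypical an event is stays the same.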

2. Customer service

Online chatbots are replacing human customer service agents throughout the customer journey, changing the way we communicate on websites and social media platforms.

Chatbots answer frequently asked questions about topics such as shipping, offer customers personalized advice, and cross-sell products.

Examples include virtual agents on e-commerce sites; messaging bots using Slack and Facebook Messenger; and tasks often performed by virtual assistants and voice assistants.
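In its simplest form, a FAQ chatbot is just keyword matching with a human hand-off; the topics and answers below are invented:

```python
# Invented FAQ entries; a real chatbot would use intent
# classification rather than plain keyword matching.
FAQ = {
    "shipping": "Standard shipping takes 3-5 business days.",
    "returns": "You can return any item within 30 days.",
    "hours": "Support is available 24/7 via this chat.",
}

FALLBACK = "Let me connect you with a human agent."

def reply(message):
    """Answer if a known topic keyword appears in the message,
    otherwise hand off: the simplest possible chatbot policy."""
    text = message.lower()
    for topic, answer in FAQ.items():
        if topic in text:
            return answer
    return FALLBACK

print(reply("How long does shipping take?"))  # -> the shipping answer
```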

3. Speech recognition

This capability allows a computer to translate human speech into written text. It is also known as automatic speech recognition (ASR).

Computer speech recognition has been used in Facebook Messenger’s virtual assistant, for voice search on mobile devices, and for device accessibility.

4. Automated stock trading

As scary as high-frequency trading sounds, it’s great for the average investor. These automated platforms can trade thousands of shares in a single day with little or no human involvement.
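Real platforms are far more sophisticated, but the flavour of rule-driven automated trading can be sketched with a moving-average crossover; the prices and window sizes below are arbitrary:

```python
def moving_average(prices, window):
    return sum(prices[-window:]) / window

def signal(prices, short=3, long=5):
    """Emit 'buy' when the short-term average is above the long-term
    one, 'sell' when below, else 'hold': a toy version of the
    rule-driven logic behind automated trading platforms."""
    if len(prices) < long:
        return "hold"
    fast = moving_average(prices, short)
    slow = moving_average(prices, long)
    if fast > slow:
        return "buy"
    if fast < slow:
        return "sell"
    return "hold"

print(signal([10, 10, 11, 12, 13]))  # rising prices -> buy
```

An automated platform simply evaluates rules like this on every tick, which is how it can trade thousands of shares a day without human involvement.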

5. Computer vision

This artificial intelligence technology allows computers to derive meaning from digital images, videos or other visual inputs.

There are applications for this technology in photo tagging in social media and radiology images in healthcare systems.
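At the lowest level, computer vision extracts features from pixel grids with operations like 2D convolution, the building block of convolutional neural networks; here is a minimal NumPy sketch using a hand-made vertical-edge kernel:

```python
import numpy as np

def convolve2d(image, kernel):
    """'Valid' 2D convolution: slide the kernel over the image and
    sum the element-wise products at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responds where intensity jumps left-to-right.
edge_kernel = np.array([[1, 0, -1]] * 3, dtype=float)
image = np.zeros((5, 6))
image[:, 3:] = 1.0          # dark left half, bright right half
response = convolve2d(image, edge_kernel)
```

The strongest responses line up with the dark-to-bright boundary; convolutional networks learn many such kernels automatically instead of using hand-made ones.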

6. Recommendation engines

These AI-driven algorithms can discover data trends, which can help create more effective cross-selling strategies.

Online retailers use this approach to make truly relevant product recommendations during the checkout process.
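The simplest version of such trend mining is co-occurrence counting over past purchases; the baskets below are invented:

```python
from collections import Counter

# Invented purchase histories (one set of items per order).
baskets = [
    {"laptop", "mouse", "sleeve"},
    {"laptop", "mouse"},
    {"laptop", "dock", "mouse"},
    {"phone", "case"},
]

def recommend(cart, k=2):
    """Recommend the k items most often bought alongside anything
    already in the cart: co-occurrence counting, the simplest form
    of the data-trend mining described above."""
    counts = Counter()
    for basket in baskets:
        if cart & basket:
            counts.update(basket - cart)
    return [item for item, _ in counts.most_common(k)]

print(recommend({"laptop"}))  # 'mouse' ranks first
```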

Artificial intelligence timeline: key dates and names

Below, we compile the most notable events in the evolution of AI.

1950: Alan Turing publishes Computing Machinery and Intelligence. In this article, Turing, famous for cracking the Nazis’ Enigma code during World War II, proposes the question «can machines think?» and introduces the Turing test to determine whether a computer can demonstrate the same intelligence (or at least perform the same function) as a human being.

Since then, the value of the Turing test has been the subject of debate.

1956: John McCarthy coins the term ‘artificial intelligence’ at the first AI Conference at Dartmouth College. He would later invent Lisp, one of the first programming languages.

Later that year, Allen Newell, J.C. Shaw and Herbert Simon create Logic Theorist, the first running artificial intelligence software program.

1967: Frank Rosenblatt builds the Mark 1 Perceptron, building on neural-network research that began in 1943 with neurophysiologist Warren McCulloch and logician Walter Pitts.

In 1969, Marvin Minsky and Seymour Papert publish Perceptrons, which becomes both a landmark work on neural networks and, for a time, an argument against future research on them.

Today: general artificial intelligence is still in its early stages, but companies are now beginning to adopt narrower AI widely as a way to solve specific challenges.

Gartner estimates that 50% of enterprises will have platforms to operationalize AI by 2025 (up from 10% in 2020).

Knowledge graphs are an emerging AI technology that can enable add-on sales strategies, recommendation engines and personalized medicine.
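At its core, a knowledge graph stores facts as (subject, relation, object) triples that can then be queried; here is a minimal sketch with invented medical triples:

```python
# Invented mini knowledge graph as (subject, relation, object) triples.
TRIPLES = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("ibuprofen", "treats", "headache"),
    ("warfarin", "treats", "thrombosis"),
]

def query(relation, obj):
    """Find every subject linked to `obj` by `relation`."""
    return [s for s, r, o in TRIPLES if r == relation and o == obj]

print(query("treats", "headache"))  # -> ['aspirin', 'ibuprofen']
```

Chaining such queries over millions of triples is what lets knowledge graphs power recommendations and, for example, drug-interaction checks in personalized medicine.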

Advanced NLP applications are also expected in the future, which will facilitate interaction with machines.