Genetic Algorithms in AI: How Genetic Algorithms Continue to Contribute to Machine Learning and Artificial Intelligence

Written by Scott Wilson


Genetic algorithms are old tools that are finding new life in the world of machine learning.

Before neural networks were inspired by the functions of neurons in the human brain, another kind of learning algorithm was inspired by the grander process of evolution itself. In fact, Alan Turing cited the principles of evolution as part of his vision for learning machines in the very same paper that proposed the Turing Test for artificial intelligence.

Computer scientists experimented with genetic algorithm (GA) approaches through the 1970s and 1980s. They were found to be useful in various types of math and coding optimization problems. GA tools like Evolver were even available for personal computers in the 1990s.

However, the wild success of other machine learning approaches that optimize along known gradients eventually eclipsed the use and growth of genetic algorithms.

What Is a Genetic Algorithm?

Genetic algorithms are part of a larger family of evolutionary algorithms: systems that attempt to use iterative methods of reproduction and selection to evolve machines toward more capable functions.

Most people who have had even a basic biology class can take a stab at guessing how genetic algorithms work. Just as splitting and recombining DNA creates mutations that leave living creatures with different levels of fitness for a given environment, genetic algorithms mutate and combine code for a shot at more effective solutions.

Genetic algorithms are, at heart, a competitive process.

Getting the process rolling is deceptively simple. A population of potential solutions to a particular problem domain is created. These can be generated entirely randomly or seeded with some basic capabilities designed to fit the problem.

Each of those solutions is encoded as a set of characteristic properties that can be altered, called its chromosome. A fitness function is coded to evaluate performance. Then the solutions are turned loose against the problem to be solved.

Genetic algorithms attempt to incorporate strategies similar to the mutation and evolution of the genomes of living creatures.

So far, this isn’t much different than any kind of evolutionary problem-solving approach to algorithm generation. But here’s where the genetic approach enters the game: the best performers among the solutions are selected, partly at random, and combined. By cross-seeding traits from successful solutions and randomly mutating some of their properties, a new generation of candidate solutions is created to take on the problem.

Then the process continues, generation after generation, until something close to the ideal form of the algorithm has been born.
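To make those steps concrete, here is a minimal sketch of that loop in plain Python, evolving bit-string chromosomes toward a toy “all ones” target. The population size, mutation rate, and fitness function are illustrative choices for the example, not part of any particular library.

```python
import random

TARGET_LEN = 20        # length of each bit-string chromosome
POP_SIZE = 50          # number of candidate solutions per generation
MUTATION_RATE = 0.02   # chance that any single bit flips
GENERATIONS = 100

def fitness(chromosome):
    # Toy fitness function: count the number of 1s (the "all ones" problem).
    return sum(chromosome)

def select(population):
    # Tournament selection: pick the fitter of two random individuals.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(parent_a, parent_b):
    # Single-point crossover: splice the front of one parent onto the back of the other.
    point = random.randint(1, TARGET_LEN - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(chromosome):
    # Randomly flip bits so new traits keep entering the population.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in chromosome]

# Start with a completely random population.
population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    best = max(population, key=fitness)
    if fitness(best) == TARGET_LEN:
        break
    # Breed the next generation from selected parents.
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

print(f"Best after {generation} generations:", max(population, key=fitness))
```

The details vary from project to project, but every genetic algorithm repeats that same cycle of evaluation, selection, crossover, and mutation.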

Mimicking Life in Code Comes With Mixed Success

While they were cutting-edge tools in machine learning for a time, genetic algorithms suffer from some limitations that made them less fit than the competition in many environments.

The repeated and randomized birth and evaluation of many generations is computationally expensive and time-consuming. Particularly when facing complex problems, GA doesn’t offer a practical solution.

It’s also difficult to evaluate GA solutions in global terms. As with any isolated population, the fittest individual a genetic algorithm produces may simply be better than its local competition, rather than the best possible solution to the problem overall; in optimization terms, GA can get stuck at a local optimum.

Genetic algorithms also don’t fare well with dynamic data sets. Any problem where the information domain may shift can result in evolutionary dead-ends.

Finally, they struggle with problems that have strict yes/no answers. Without a graded fitness signal there’s nothing to evolve toward, and their results may be no better than random guesses.

That’s a long list of limitations for the GA technique. Yet researchers and AI engineers keep coming back to it in certain situations where no other tools will cut it.

Genetic Algorithm Machine Learning Techniques Still Have Some Traction

Whatever their drawbacks, genetic algorithms have a kind of elegant simplicity that keeps them on the table when all other options have failed.

With a home in popular tools like MATLAB, genetic algorithms have a wide audience beyond the AI world.

Genetic algorithms have largely been eclipsed as tools for optimization by gradient descent and reinforcement learning. Even in the evolutionary algorithm space, they are falling out of favor compared to newer ideas like particle swarm and ant colony optimization algorithms.

But there are still specific kinds of problem domains where the competition and crossbreeding of genetic algorithms offer an edge.

In particular, genetic algorithm optimization has proven useful in certain kinds of scheduling and routing problems like the classic Traveling Salesman Problem. An NP-hard problem of finding the shortest route that still visits every point in a given set, it’s surprisingly challenging mathematically and has plenty of real-world applications. For certain numbers of points, GA comes up with nearly optimal solutions with far less effort than pure brute-force calculation requires.
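As an illustration only, here is one way a GA can be wired up for a small TSP instance: each chromosome is a permutation of city indices, total tour length stands in for fitness (shorter is fitter), and an order crossover keeps offspring as valid tours. The city coordinates and GA settings are made-up assumptions for the sketch.

```python
import math
import random

# Hypothetical city coordinates for a small example instance.
CITIES = [(0, 0), (3, 7), (6, 2), (8, 8), (2, 5), (9, 1), (5, 9), (1, 8)]

def tour_length(tour):
    # Total round-trip distance of visiting the cities in the given order.
    return sum(math.dist(CITIES[tour[i]], CITIES[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def order_crossover(parent_a, parent_b):
    # Copy a slice from one parent, then fill in the remaining cities
    # in the order they appear in the other parent (keeps the tour valid).
    n = len(parent_a)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = parent_a[i:j]
    fill = [c for c in parent_b if c not in child]
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def mutate(tour, rate=0.1):
    # Occasionally swap two cities to keep exploring new tours.
    tour = tour[:]
    if random.random() < rate:
        a, b = random.sample(range(len(tour)), 2)
        tour[a], tour[b] = tour[b], tour[a]
    return tour

population = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(100)]
for _ in range(300):
    population.sort(key=tour_length)          # shortest tours first
    parents = population[:20]                  # keep the best performers
    population = parents + [mutate(order_crossover(*random.sample(parents, 2)))
                            for _ in range(80)]

print("Shortest tour found:", min(population, key=tour_length))
```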

Another genetic algorithm example is an X-band radio antenna created by NASA for a space mission with unique beam and polarization requirements. The bizarre-looking, tiny bit of twisted metal didn’t resemble any conventionally engineered antenna. But the design created by competitive evolutionary algorithms performed as predicted, and better than human-designed antennas.

Genetic algorithm optimization may also still be an attractive alternative when approaching problem domains with limited information. The current generation of deep learning algorithms all require very large amounts of data in order to train effectively for problem solving. But in very specialized areas with limited data, a GA approach may be the best available.

What Does the Future Hold in Artificial Intelligence for Genetic Algorithms?

Genetic algorithms may also find new life as a sort of meta-approach to tuning models for other kinds of machine learning algorithms.

For example, some AI development teams use genetic algorithms for feature selection and hyperparameter tuning. Fine-tuning those parameters before beginning the training process can boost model performance.
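As a rough sketch of the idea, and assuming scikit-learn is available, a GA for hyperparameter tuning might treat each candidate configuration as a chromosome and use cross-validated accuracy as the fitness function. The parameter ranges, population size, and generation count below are illustrative assumptions, not recommendations.

```python
import random
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Each "chromosome" is a pair of hyperparameters: (n_estimators, max_depth).
def random_individual():
    return [random.randint(10, 200), random.randint(2, 12)]

def fitness(individual):
    # Cross-validated accuracy serves as the fitness score.
    n_estimators, max_depth = individual
    model = RandomForestClassifier(n_estimators=n_estimators,
                                   max_depth=max_depth, random_state=0)
    return cross_val_score(model, X, y, cv=3).mean()

def mutate(individual):
    # Nudge one hyperparameter at random.
    child = individual[:]
    if random.random() < 0.5:
        child[0] = max(10, child[0] + random.randint(-30, 30))
    else:
        child[1] = max(2, child[1] + random.randint(-2, 2))
    return child

population = [random_individual() for _ in range(10)]
for generation in range(5):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:4]    # keep the best configurations
    # Breed new configurations by mixing parameters from two parents, then mutating.
    population = parents + [mutate([random.choice(parents)[0],
                                    random.choice(parents)[1]])
                            for _ in range(6)]

best = max(population, key=fitness)
print("Best hyperparameters found (n_estimators, max_depth):", best)
```

The same pattern works with any model whose settings can be encoded as a list of values and scored with a single number.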

GA is also being applied to complex multi-agent environments. While training a single ML model is more efficient than using GA in many cases, the chaotic environment created by throwing many AI systems together simultaneously produces highly complex coordination and optimization problems. These are the kinds of problems that genetic algorithms excel at, and they are being tested on multi-agent drone delivery issues.

Genetic algorithms may become crucial to managing the Artificial Intelligence Internet of Things, where many different AI systems will need to connect, communicate, and cooperate.

The reverse is also proving true. The creation of Large Language Models (LLMs) may be about to give genetic algorithms a shot in the arm. At least one research group found positive results by using LLMs to generate code suggestions to nudge GA mutations toward more optimal solutions.

Neural networks use semi-randomized approaches to explore the problem domains they encounter as well. But they are more straightforwardly iterative, moving in a particular direction toward optimization.

How to Learn About Genetic Algorithm Applications and Uses in Machine Learning

Genetic algorithms aren’t the focus of many studies today in artificial intelligence and machine learning. The limited applications of the technique compete with newer and more promising neural network and logic systems when it comes to class time.

But competition is what genetic algorithms are all about, so there is still room for them in most AI and ML degree programs, if only in passing. Genetic algorithm Python examples will be part of most AI textbooks, and even undergrad conceptual AI courses may have a lecture or two devoted to them or their overall family of evolutionary algorithms.

That means a basic program like a Bachelor of Science in Machine Learning will generally give you the tools you need to build genetic algorithms in MATLAB or Python. It’s a broad approach to ML education that will also give you the context needed to understand how GA, and other evolutionary algorithms, fit into larger schools of thought in machine learning.

Equally importantly, they will round out your essential math, statistics, and coding skills so you can put them all together.

Genetic algorithms are both niche and old school.

Unlike more popular and mainstream ML methods and techniques, you’re not likely to find much about genetic algorithm programming in a typical ML educational certificate program. With only a short, focused course of study to work with, those types of programs tend to stick to core ML principles and practices and the latest developments in the field.

AI Is Evolving All the Time, and Genetic Algorithms May Prove Their Worth in the Future

While genetic algorithms may seem to have very limited uses in artificial intelligence today, it’s a field where researchers should never say never. Understanding the full range of tools available is key when coming up against complex challenges of logic and problem-solving.

Time and time again, methods that were once discarded as dead ends or ineffective have been resurrected through new approaches or new technical capabilities. Just as artificial neural networks themselves were once considered all but dead after Minsky and Papert seemed to prove that perceptrons could not solve certain classes of common logic problems, researchers keep discovering ways to bring old ideas back.

Genetic algorithms may be one of those ideas that is just waiting for a bright new PhD student to break through.