Artificial Intelligence, also known as AI, is the idea that human thought processes can be replicated and mechanized. To some people, AI is a scary thought that conjures images of the Terminator, Blade Runner replicants, or HAL 9000, the sinister computer in the movie 2001: A Space Odyssey. But fear not! In the real world, AI is a set of tools that helps analyze data, eliminate errors, and enhance human decision-making.
History of AI
The idea of machines that can “think” appeared in science fiction well before working computers existed. In 1949, the Victoria University of Manchester in England completed the Manchester Mark 1, one of the first computers to run stored programs. Containing over 4,000 vacuum tubes and consuming 25 kW of power, it worked on mathematics problems using code written by computer scientist Alan Turing, widely considered the father of AI. By 1956, the first basic AI program, Logic Theorist, had been written by Allen Newell, Herbert Simon, and Cliff Shaw. It introduced the concepts of the search tree, heuristics, and list processing: the foundations of AI research.
Fast-forward roughly 30 years: Carnegie Mellon University’s Navlab built some of the first self-driving vehicles. In 1997, IBM’s Deep Blue defeated world chess champion Garry Kasparov, and that same year, Dragon Systems released one of the first widely available speech-recognition products. Since then, interest and development in AI have accelerated rapidly.
Everyday Applications of AI
You might not realize it, but AI is part of many tools and applications that improve our lives. Let’s look at a few examples.
Have you ever asked Alexa to play a song while you are out walking? Is Siri your best bet for finding recipes? Digital assistants adapt to our responses and provide suggestions or complete basic data entry for us, saving time and quickly providing needed information.
Many video-streaming services, including Netflix and Amazon Prime, gather data on your viewing habits to recommend TV series you might enjoy. With tens of thousands of shows online, the choice can be overwhelming without some way to match your likes and dislikes.
Banks and other financial services providers use AI to analyze data from past transactions and flag those that seem unusual or suspicious. As AI learns your buying patterns, it gets better at detecting potential fraud. And as scammers grow savvier, these AI tools become critical to our financial well-being.
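To make the idea of "flagging unusual transactions" concrete, here is a deliberately simplified sketch in Python: it flags any new charge that falls far outside a customer's historical spending pattern. Real fraud systems use far richer models and many more signals; the amounts and threshold here are invented for illustration.

```python
# Toy illustration (not a production fraud model): flag transactions that
# deviate sharply from a customer's historical spending pattern.
from statistics import mean, stdev

def flag_unusual(history, new_amounts, threshold=3.0):
    """Flag amounts more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    return [amt for amt in new_amounts if abs(amt - mu) > threshold * sigma]

past = [12.50, 9.75, 14.20, 11.00, 13.30, 10.80]  # typical purchases
incoming = [11.90, 950.00]                         # one looks suspicious
print(flag_unusual(past, incoming))                # the $950 charge is flagged
```

The key intuition matches the text: the more past data the system has, the better its picture of "normal" becomes, and the more reliably outliers stand out.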
Used by online retailers, utility companies, healthcare organizations, law enforcement, and more, chatbots learn from your answers and statements to provide the information you need or route you to the correct department and person.
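A minimal sketch of how such routing can work, assuming a simple keyword-matching approach (real chatbots typically layer machine-learned intent classification on top of rules like these; the departments and keywords below are made up):

```python
# Hypothetical rule-based chatbot router: score each department by how many
# of its keywords appear in the customer's message.
ROUTES = {
    "billing": ["invoice", "charge", "payment", "refund"],
    "technical support": ["error", "crash", "login", "password"],
    "sales": ["pricing", "upgrade", "plan"],
}

def route(message, default="general inquiries"):
    """Return the department whose keywords best match the message."""
    words = message.lower().split()
    scores = {dept: sum(w in kws for w in words) for dept, kws in ROUTES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default

print(route("I was double charged on my last invoice"))  # billing
```

A learning chatbot would refine these mappings over time from conversations, rather than relying on a fixed keyword list.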
AI is growing rapidly, and we already take for granted how it makes our lives easier. Already, 37% of businesses and organizations use AI, and nine out of ten leading companies are investing in it.
AI in Automation
Machine Learning uses sample data to build models that make predictions or decisions about new, real-world data. Neural networks mimic the behavior of the human brain to recognize patterns.
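As a toy illustration of "learning from sample data," here is a minimal nearest-neighbor classifier in Python: it predicts a label for a new data point by finding the most similar labeled example. The features and labels are invented for this sketch and do not reflect any particular product.

```python
# Toy 1-nearest-neighbor classifier: "learn" from labeled sample data,
# then predict a label for a new, unseen data point.
def nearest_neighbor(samples, point):
    """samples: list of ((x, y), label); returns the label of the closest sample."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    _, label = min(samples, key=lambda s: dist2(s[0], point))
    return label

# Invented sample data: (hours watched per week, avg. episode length) -> viewer type
training = [((1, 60), "documentary fan"), ((2, 55), "documentary fan"),
            ((10, 22), "sitcom binger"), ((12, 25), "sitcom binger")]
print(nearest_neighbor(training, (11, 24)))  # sitcom binger
```

Production systems use far more sophisticated models, but the principle is the same: past examples shape predictions about new cases.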
Intelligent Document Processing uses Machine Learning to scan, read, extract, categorize, and organize information into meaningful formats.
Business Process Automation (BPA) refers to the automation of business processes normally done by people. Chatbots are an example.
Remember those digital workers? These are also known as robots or robotic process automation (RPA). RPA mimics human actions, such as mouse clicks, copying or transferring files, etc., to perform business process activities at a high volume. They often use AI to mimic simple decision-making.
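As a hypothetical sketch of an RPA-style "digital worker," the following Python snippet sorts incoming files into folders by file type, the way a person might drag and drop them, with a simple rule table standing in for the tool's decision-making. The folder names and rules are invented; real RPA platforms drive existing applications rather than scripts like this.

```python
# Hypothetical RPA-style task: file each incoming document into a folder
# chosen by a simple extension-based rule (the "decision" the bot mimics).
import shutil
import tempfile
from pathlib import Path

RULES = {".pdf": "invoices", ".csv": "reports", ".png": "images"}

def sort_inbox(inbox: Path) -> dict:
    """Move each file into a subfolder and report where it was placed."""
    placed = {}
    for f in sorted(inbox.iterdir()):
        if f.is_file():
            dest = inbox / RULES.get(f.suffix, "other")
            dest.mkdir(exist_ok=True)
            shutil.move(str(f), dest / f.name)
            placed[f.name] = dest.name
    return placed

# Demo in a throwaway directory
inbox = Path(tempfile.mkdtemp())
for name in ["a.pdf", "b.csv", "c.txt"]:
    (inbox / name).touch()
print(sort_inbox(inbox))  # {'a.pdf': 'invoices', 'b.csv': 'reports', 'c.txt': 'other'}
```

The high-volume, repetitive nature of the task is exactly what makes it a good RPA candidate.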
Ethics and Responsibility
The old saying goes, “With great (computing) power comes great responsibility.” AI has significantly improved our lives, but handing over previously human-powered analysis and decision-making to machines brings some serious ethical concerns, with privacy, bias, and discrimination at the forefront.
In general, encryption keeps data secure, but privacy is about more than security. Other issues include:
Unintended Bias and Discrimination
As we have noted, AI can adjust and self-learn, but if its training data is flawed, it can “learn” things that are not true and make decisions that unintentionally cause harm. For example:
- When facial recognition software used in law enforcement is trained on datasets containing a disproportionate number of white male faces, it becomes better at recognizing white males. If a suspect is a woman or is non-white, more false-positive IDs occur.
- AI hiring algorithms often base hiring and compensation recommendations on candidates’ prior roles and salaries, so any existing biases based on sex, race, or other factors become amplified.
Some Solutions to These Concerns
The U.S. AI Initiative encourages AI research and development, creates awareness of common problems, and looks to develop privacy enforcement methods and remedies for those affected by AI-based unfair decisions or discrimination.
The Federal Trade Commission provides guidance on AI transparency, explainability, fair lending decisions, and the use of robust and empirically sound data and models.
As in all government activities, transparency builds public trust. Consumers and citizens alike need to be aware when an agency collects their data or uses automated tools.
Jason Furman, a professor of economic policy at the Kennedy School at Harvard University, believes that AI regulation should be the responsibility of industry-specific panels, as they will be far more knowledgeable about the use of AI within the full suite of technology tools.
AI is a powerful technology when implemented and used correctly, with human insight, intuition, and involvement serving as checks and balances.