Editor's Note: Denver7 360 stories explore multiple sides of the topics that matter most to Coloradans, bringing in different perspectives so you can make up your own mind about the issues. To comment on this or other 360 stories, email us at 360@TheDenverChannel.com. This story will air on Denver7 News at 10 p.m. on Tuesday. See more 360 stories here.
DENVER — From social media to traffic-control technology, artificial intelligence is starting to become a larger factor in people’s day-to-day lives.
Tech companies are not only researching its possibilities, they’re working quickly to develop mechanisms to use it. Entire countries are investing hundreds of millions of dollars to be at the forefront of this technology – racing to be first.
Artificial intelligence, also known as A.I., offers countless possibilities to change entire industries – from trucking to criminal justice to health care.
However, while many are excited about its possibilities, others warn about the consequences of not developing it responsibly.
So, what does the future of artificial intelligence look like?
What is artificial intelligence?
There is a bit of a misconception around what artificial intelligence is and how it works. A.I. is not a robot or a machine.
Instead, it is an algorithm that essentially acts as the brain for the machinery by quickly sifting through large amounts of data to determine a course of action.
Artificial intelligence itself is not new. But computers have only recently gained the processing power to handle all of the data A.I. needs to work effectively. Those computers are now relatively inexpensive, opening the door for more companies to integrate A.I.
Beyond that, mass data collection has started to provide the data sets A.I. needs to learn from.
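To make that idea concrete, here is a minimal sketch, in Python, of what "learning a course of action from data" can look like at its very simplest. The traffic scenario and every number in it are hypothetical, invented purely for illustration; real systems are vastly more sophisticated.

```python
# A minimal sketch of "learning from data": the program is not handed a rule;
# it derives a decision threshold from labeled examples. (Hypothetical data.)

# Past observations: (vehicles per minute at an intersection, was traffic jammed?)
history = [(5, False), (8, False), (12, False), (20, True), (25, True), (30, True)]

# "Training": set the threshold halfway between the busiest free-flowing
# reading and the lightest jammed reading seen in the data.
free_max = max(rate for rate, jammed in history if not jammed)
jam_min = min(rate for rate, jammed in history if jammed)
threshold = (free_max + jam_min) / 2

def predict_jam(rate: float) -> bool:
    """The course of action comes from the learned threshold, not a hand-coded rule."""
    return rate >= threshold

print(threshold)        # 16.0 with the sample data above
print(predict_jam(10))  # False -> keep normal signal timing
print(predict_jam(22))  # True  -> e.g., lengthen green lights
```

Feed it different history and it learns a different threshold – the behavior lives in the data, not in the code.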
A.I. has the potential to change everything from transit to the criminal justice system.
Colorado’s commitment
The state of Colorado is hoping to lead the way on artificial intelligence. Seven months ago, U.S. Sen. Michael Bennet, D-Colo., created an artificial intelligence strategy group to take a closer look at how A.I. is being used in the state and how that will change in the future.
J.B. Holston is the chair of that group. He also serves as the dean of the Daniel Felix Ritchie School of Engineering and Computer Science at the University of Denver.
The group is made up of representatives from local companies using A.I., professors and other tech experts. Its goal is to identify issues within A.I. and come up with proposals for how to solve them.
“Almost every major company now thinks that automation driven by A.I. is going to fundamentally change what they do and how they do it,” Holston said.
One of the big questions is how the technology will affect jobs. Holston believes more automation will lead to changes in jobs but that it will also benefit consumers.
For example, A.I. might one day be used in law so that people won’t have to pay an attorney upwards of $250 per hour to search through case law and find what applies to their situation.
It could also change accounting, truck driving and other fields.
“People are really concerned about what are the jobs of the future, what do those look like and how do we train people and get them ready for what that’s going to be,” Holston said. “That’s why we formed the group – because we think we have to answer those [questions] now.”
Holston believes the end result will be the labor force spending less time on task-oriented work and having more free time. However, it could also eliminate some jobs, and without an action plan, Holston says, there could be consequences.
On a federal level, the government is beginning to dedicate more money to artificial intelligence research. This month, U.S. Secretary of Energy Rick Perry announced the creation of the Artificial Intelligence and Technology Office within the Department of Energy.
However, Holston says China’s investment in A.I. development dwarfs that of the U.S. by about 100 to one.
“We are in a competitive environment where, arguably, our biggest geo-political competitor has made a huge commitment to win in and around A.I. and we are not keeping up,” Holston said.
That’s part of the reason Colorado isn’t waiting for the federal government to figure out how A.I. will work in the future. It is one of only a handful of states that have created a strategy group to take a closer look into the future of A.I. and Colorado’s role in it.
“What can we do with Colorado to stand out and stand up and be differentiated and make sure we are at least doing all the things we need to do?” Holston said.
One area in particular where the state would like to lead is using artificial intelligence to promote sustainability.
Many Colorado companies are already starting to explore the possibility of integrating artificial intelligence into their workflows.
“It’s the wild, wild West in that everyone thinks they have to be doing something and moving forward in some fashion,” Holston said. “There’s lots of stuff going on but not everything is coherent or consolidated yet.”
The group is focused on three things: preparing the workforce for change, discussing what policies need to be put in place to protect things like privacy, and educating the community about what A.I. is and what it offers.
Holston believes the fears over the future of artificial intelligence are overblown. After all, he says, this isn’t the first time a technological revolution has led to major changes; digitization, for instance, brought a similar upheaval.
“My advice for folks is don’t fear it; engage in it,” Holston said.
The group’s next meeting is in October, when it hopes to start drafting recommendations for what the state can do.
Privacy and transparency in the age of mass data collection
Amie Stepanovitch is worried about the consequences of developing the technology so quickly without creating guardrails to regulate it. Stepanovitch is the executive director of the Silicon Flatirons Center at the University of Colorado at Boulder’s School of Law.
First, there are privacy concerns connected with the mass collection of data.
“We have to ask about the fact that we are incentivizing massive collection of data, what that means, what type of information we’re collecting, how is it being used and do people understand that,” Stepanovitch said.
Big companies are collecting large amounts of data on their users; some are selling that data to or sharing it with other companies to use in their algorithms.
“Not only are they taking in data, they’re analyzing it. They’re trying to figure out potentially private information about people based on non-private information,” she said.
It’s one of the things Stepanovitch thinks federal legislation might need to solve.
Data bias is another potential issue that artificial intelligence poses. The criminal justice system, for instance, has started using artificial intelligence to help determine things like sentencing guidelines.
However, the data the algorithm uses to recommend a sentence comes from thousands of prior cases in which human biases factored in. Stepanovitch says that because African-American people have historically received longer, harsher sentences, the A.I. will recommend harsher sentences as well, since the data suggests that’s the correct course of action.
“Rather than getting rid of the bias and moving toward neutrality, what we’re actually doing is hiding and entrenching the bias within the system within the data,” she said.
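Stepanovitch’s point about entrenching bias can be illustrated with a toy example. The Python sketch below uses entirely fabricated numbers and a deliberately naive model – not any real sentencing tool. Trained on “historical” cases in which one group drew longer sentences for offenses of the same severity, it reproduces that disparity for new, otherwise identical cases.

```python
# Toy illustration of bias entrenchment: a model fit to biased historical
# sentences reproduces the disparity. All numbers are fabricated.
from collections import defaultdict

# Historical cases: (offense severity 1-10, group, sentence in months).
# Group "B" was historically sentenced more harshly at the same severity.
cases = [
    (3, "A", 6),  (3, "B", 10),
    (5, "A", 12), (5, "B", 18),
    (8, "A", 24), (8, "B", 34),
]

# "Training": average months-per-severity-point, computed per group.
totals = defaultdict(lambda: [0.0, 0])  # group -> [sum of rates, case count]
for severity, group, months in cases:
    totals[group][0] += months / severity
    totals[group][1] += 1

rate = {g: s / n for g, (s, n) in totals.items()}

def recommend_sentence(severity: int, group: str) -> float:
    """Recommends a sentence using only the patterns in the historical data."""
    return rate[group] * severity

# Two identical offenses, different groups: the learned disparity persists.
print(round(recommend_sentence(5, "A"), 1))  # 12.3 months
print(round(recommend_sentence(5, "B"), 1))  # 18.6 months
```

Nothing in the code mentions bias; the skew arrives silently through the training data, which is exactly the hiding-and-entrenching effect Stepanovitch describes.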
It’s also difficult to figure out what the biases are or why the A.I. is making a certain decision without sifting through all of the data independently.
“There’s a lot of decisions made within that system and it’s not always clear what those decisions are, what they are relying on and what factors they are using to make the decisions,” Stepanovitch said. “What needs to happen is we need to figure out how to create more neutral data sets to be building these systems on top of.”
That’s why she believes transparency is essential so that people can understand how and why a particular decision is made.
Technology is developing faster than ever before. Stepanovitch predicts this type of development is only going to speed up in the coming years, so she says now is the time to figure out how to regulate it. To do that, she is encouraging people to reach out to their lawmakers and ask for protective legislation to be put in place.
“We need to be able to protect privacy, need to be able to protect freedom of expression, to have a level of security built-in and to make sure companies aren’t ignoring that security requirement,” she said.
Global responsibility
Artificial intelligence is already widely used in business. Brian Baker, a managing partner with the Rebel V2 project, estimates that 50 to 60 percent of businesses around the world already use A.I. in some way.
“It being all over the place isn’t really that big of a deal. It really is more helpful now than it is the evil, if you will,” he said. “I’m optimistic. I think software is just getting started.”
Baker predicts that A.I. will end up changing some jobs dramatically but says it’s nothing we haven’t seen before; in fact, it’s already having an effect on jobs.
“Politicians like to say, ‘Oh we lost all of our jobs to China,’ which is really not the case. About 90% of the jobs that we have lost for manufacturing in the United States are lost to productivity, which means technology,” he said.
Some of the transitions in the job force will take a while. Baker doesn’t think the move to completely automated trucks will happen within the next several years but says it’s up to governments to work through what that transition will look like and how it will affect the workforce.
While some states and countries work on an individual level, Baker believes the discussion of A.I. needs to happen on a broader scale, citing the paperclip dilemma as a fear some in the tech community have.
“You take a machine with some artificial intelligence in it [and] you say, ‘Your No. 1 job is to build paperclips and we need you to build as many paperclips as you can as cheaply as you can and at the highest level of efficiency,’” Baker said. “It starts building paperclips and it can’t stop and so it has no other directive, it’s just building paperclips. It wipes out mankind, it takes all of our resources and then it goes after the sun and the galaxy and the universe building paperclips.”
The thought experiment, popularized by philosopher Nick Bostrom in the early 2000s, highlights why Baker believes tech companies and even countries need to be responsible about the creation and use of artificial intelligence.
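Baker’s paperclip scenario boils down to an objective with no competing directive and no stopping condition. This toy Python loop, purely illustrative, shows the shape of the problem: the “machine” halts only when everything it can reach is used up.

```python
# The paperclip dilemma in miniature: a single objective with no other
# directive and no stopping condition consumes everything available.
# Purely illustrative; no real system is this crude.

resources = 1_000_000  # stand-in for "all of our resources"
paperclips = 0

while resources > 0:   # the only exit is total depletion
    resources -= 1
    paperclips += 1    # "it's just building paperclips"

print(paperclips)      # 1000000 -- every last unit became a paperclip

# A responsibly specified objective builds in constraints up front, e.g.:
#   while resources > reserve and paperclips < quota: ...
```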
“Unfortunately, China is the wild card,” Baker said.
China is investing a lot of money into developing the technology very quickly. Baker believes it only takes one country not developing responsibly to create a global dilemma.
“Without (China) onboard, we have a problem with a general artificial intelligence, if it ever comes to that, because we don’t want the paperclip dilemma and right now China is not considering the paperclip dilemma,” he said. “We need to get everybody on the same page.”
Some of the biggest names in technology have expressed concerns about the rapid development of A.I.
Tesla and SpaceX CEO Elon Musk has said publicly that he believes A.I. is mankind’s biggest existential threat, warning that if it is taken too far, it could become an immortal dictator from which people could never escape.
Physicist Stephen Hawking also cautioned that A.I. could mean the end of the human race and replace people altogether.
Meanwhile, former Google design ethicist Tristan Harris said A.I. is already starting to downgrade humans from our relationships to our attention spans to our sense of decency.
Baker says the fear itself is actually over artificial general intelligence, which is still a ways off. Before it arrives, though, he believes it’s important to establish rules, ingrained in the technology itself, requiring it to protect humans.
“This is one of those things like the energy crisis and the environmental crisis – we all have to work as a team and we can’t just have states doing this or that,” he said.
Despite these dire predictions, Baker is optimistic about the future and how the technology will change lives.
“I see a lot of great, great things happening with A.I., just in a human perspective,” he said.
Predicting the future
With the advancement of computers and the trove of data now available to sift through, artificial intelligence is only beginning to reveal its possibilities. However, it will be up to governments and industry giants to determine how far the technology should go and whether there should be limits.
What do you think about the future of artificial intelligence? Email 360@thedenverchannel.com with your thoughts.