Companies in nearly every industry are implementing artificial intelligence to simplify work for staff and tasks for consumers.
Computer software teaches customer service agents how to be more compassionate, schools use machine learning to scan for weapons and mass shooters on campus, and doctors use AI to map the root cause of diseases.
Sectors such as cybersecurity, online entertainment and retail use the technology, combined with vast troves of customer data, in revolutionary ways to streamline services.
While these applications may seem harmless, perhaps even helpful, AI is only as good as the data it is fed, and that can have serious implications.
You may not realize it, but AI already helps determine whether you qualify for a loan in some cases. And there are products in the pipeline that could lead to police stopping you because software misidentified you as someone else.
Imagine if people on the street could snap your picture, then a computer scanned a database to tell them all about you. Or if an airport security camera flagged your face as a criminal's while you strolled through TSA checkpoints.
These are real-world possibilities when technology that is supposed to enhance convenience has human bias built into its framework.
"Artificial intelligence is a super powerful tool and, like any really powerful tool, it can be used to do many things – some good and some problematic," said Eric Sydell, executive vice president of innovation at Shaker International, which develops AI-enabled software.
"In the early stages of any new technology like this, you see many companies trying to figure out how to incorporate it into their business," Sydell said, "and some are doing it better than others."
Artificial intelligence tends to serve as a catchall term for tasks a computer performs that would normally require a human, such as speech recognition and decision-making.
Whether intentionally or not, humans make judgments that can seep into the code they create for AI to follow. That means AI can contain implicit racial, gender and ideological biases, which has prompted a number of federal and state regulatory efforts.
In June, Rep. Don Beyer, D-Va., offered two amendments to a House spending bill that would prevent federal funds from paying for facial recognition technology used by law enforcement and require the National Science Foundation to report to Congress on the societal impacts of AI.
"I don't think we should ban all federal dollars from all AI. We just have to do it carefully," Beyer told USA TODAY. He said machine learning and facial recognition software could lead police to misidentify someone, prompting an officer to draw a gun in extreme cases.
"I think we will soon move to ban the use of facial recognition technology in body cameras because of real-time concerns," Beyer said. "When the data is inaccurate, it can cause a situation to spiral out of control."
AI is also used in predictive policing, in which a computer estimates the likelihood that a person will commit a crime. While not exactly the "pre-crime" police units of Tom Cruise's sci-fi hit "Minority Report," the technique has faced close scrutiny over whether it improves safety or simply perpetuates inequities.
Americans expressed mixed support for AI applications, and the majority (82%) agreed that it should be regulated, according to a study this year from the Center for Governance of AI and the University of Oxford's Future of Humanity Institute.
When it comes specifically to facial recognition, most Americans say law enforcement agencies would put the technology to good use.
Numerous studies suggest that automation will destroy jobs held by humans. Oxford scholars Carl Benedikt Frey and Michael Osborne, for example, estimated that 47% of U.S. jobs will be at high risk of automation by the mid-2030s.
While some workers worry about being displaced by computers, others are getting hired thanks to AI-enabled software.
The technology can match candidates who have the right skill sets for a specific work environment with employers whose staff may be too busy to screen applicants themselves.
Automation can affect blue and white collar workers. (Photo: Getty Images)
Shaker International uses data collected from tests, audio interviews and resumes to predict how a person might behave at work.
"Significant bits of information" include "how a person will perform at work, how long he or she will stay, whether the person will be a top sales executive or a high-quality worker," Sydell said.
Using AI, "we can get rid of processes that don't work well or are redundant. And we can give candidates a better experience by providing real-time feedback throughout the process," Sydell said.
He said that if AI is poorly implemented, it can make the work environment worse, but if done carefully, it can lead to fairer workplaces.
For better or worse, artificial intelligence has been shaping people's financial decisions for years. It plays an increasingly significant role in how traders invest, and it is particularly effective at preventing credit card fraud, experts said.
Some AI stocks are great buys. (Photo: Getty Images)
Where things get questionable is when the technology is used to decide whether you are trustworthy enough to borrow money from a bank.
"Whenever you apply for a loan, there may be an AI deciding whether or not that loan should be granted," said Kunal Verma, co-founder of AppZen, an AI platform for finance teams at clients including WeWork and Amazon.
The technology is often billed as a faster and more accurate way to assess a potential borrower, since it can sift through tons of data in seconds. Still, there is room for error.
If the data fed into an algorithm shows that you live in an area where many people have defaulted on their loans, the system may conclude that you are not creditworthy, Verma said.
"It may also happen that the area has many people from certain minorities or with other characteristics that could lead to a bias in the algorithm," Verma said.
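Verma's point can be made concrete with a toy sketch. Everything below is invented for illustration (the scoring formula, income figures, default rates and approval cutoff are not from any real lender): a model that scores applicants partly on their neighborhood's historical default rate will downgrade every applicant from that area, regardless of individual repayment history.

```python
def loan_score(income, area_default_pct):
    """Toy credit score: rewards income, penalizes the applicant's
    neighborhood's historical default percentage. All numbers are
    invented purely for illustration."""
    return income / 1000 - area_default_pct

# Two applicants with identical incomes and identical personal records,
# differing only in where they live.
low_default_area = loan_score(income=60_000, area_default_pct=2)    # 58.0
high_default_area = loan_score(income=60_000, area_default_pct=30)  # 30.0

# With an approval cutoff of, say, 40, the second applicant is rejected
# solely because of where they live; if that area is predominantly
# minority, the geographic feature becomes a proxy for race.
```

The bias here is not in any single line of code but in the choice of feature: neighborhood default rate looks neutral yet can encode the demographics of the area.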
Bias can creep in at almost every stage of the deep-learning process. However, algorithms can also help reduce disparities caused by human misjudgment.
AI can also help detect breast cancer. (Photo: Getty Images)
One type of solution involves adjusting sensitive attributes in a data set to offset skewed outcomes. Another is preprocessing the data to maintain accuracy. Either way, the more data a company has, the fairer its AI can be, Sydell said.
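One well-known preprocessing technique of this kind, "reweighing," gives a flavor of how such corrections work. The sketch below is a minimal illustration, not Shaker's actual method, and the toy hiring-style data is invented: each record gets a weight so that, after weighting, group membership and outcome are statistically independent.

```python
from collections import Counter

def reweigh(groups, labels):
    """Compute a fairness weight for each record using the classic
    reweighing formula: weight(g, y) = P(g) * P(y) / P(g, y).
    Records from (group, outcome) pairs that are under-represented
    relative to independence get weights above 1."""
    n = len(labels)
    group_counts = Counter(groups)                 # records per group
    label_counts = Counter(labels)                 # records per outcome
    joint_counts = Counter(zip(groups, labels))    # records per (group, outcome)
    return [
        (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy data: group A gets the favorable outcome 3 times out of 4,
# group B only 1 time out of 4.
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
labels = [1, 1, 1, 0, 1, 0, 0, 0]   # 1 = favorable outcome

weights = reweigh(groups, labels)
# Favorable-outcome B records get a weight above 1 and favorable-outcome
# A records a weight below 1, nudging a model trained on the weighted
# data toward parity between the groups.
```

The design choice worth noting is that the data is repaired before training rather than after: the model itself needs no modification, which is why preprocessing approaches like this are popular when the model is a black box.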
"There is a reason why Google, Facebook and Amazon are leaders in AI," Sydell said. "It's because they have a lot of data to process. Other companies have access to the same type of AI technology, but they may not have large amounts of data to use and apply. That's the stumbling block."
Beyer, the congressman who wants to regulate AI, favors having humans review decisions made by computers "until the technology is perfect, if ever."
He said it may be worth asking whether AI should be the go-to solution for every problem, including decisions about whether someone goes to jail.
"And once it's perfect, we have to start thinking about privacy. Is it reasonable to take a picture of someone and run it through a database?" Beyer said. "If AI can read an X-ray far faster, far more accurately and with less bias than a human, that's fantastic. If we give AI the ability to declare war, we're in big trouble."
Follow Dalvin Brown on Twitter: @Dalvin_Brown.
Read or share this story: https://www.usatoday.com/story/tech/2019/10/02/how-artificial-intelligence-bias-can-work-work-against-you/2417711001/