By: Yonatan Abrams | Business

Developing a Reasonable Perspective on AI

Technological advancement is exciting and tempting. For researchers, development is intellectually stimulating and mysterious. For the public, using the improved technology is convenient and pleasurable. But for philosophers and economists, things are rarely so clear-cut.

In his course “Travel, Technology, and Modernity,” YU’s Professor Douglass Burgess speaks at length about “technological overreach”: the idea that a society can implement a new technology before it is fully prepared for the impact that technology will have on its culture or economy. Burgess explains that technological overreach is a failure of the scientific elite, who blindly follow both their passion to push the boundaries of science and engineering and their materialistic desire to be the first to earn the next million-dollar patent and the respect of the public.

In the field of artificial intelligence (AI), the lack of ethical and economic oversight provides fertile ground for technological overreach, and we should be concerned about its possible repercussions. AI is the theory and development of computer algorithms that can behave in ways similar to human intelligence. AI algorithms are being used to autonomously drive cars, evaluate convicts for sentencing and parole, power predictive typing and shape numerous other massively influential areas of our lives.

In an April 2018 discussion with the Wall Street Journal, technology policy experts Julia Powles, a researcher in law and technology at NYU School of Law and Cornell Tech, and Adam Thierer, a researcher with the Technology Policy Program at George Mason University’s Mercatus Center, expressed their concern that private industry is calling all of the shots on issues of ethics and legality. Powles even reported the grave truth that “You’d be hard pressed to find experts [in the field of technology policy] that don’t hold a position at or find funding from the big technology [companies] … basic concerns that ought to be at the center of debate, like whether technologies ought to be explained and proven before being released in the wild, are readily dismissed.”

I am not claiming that everything is going to be horrible: that all human interaction will be lost, that artificially intelligent robots will rule over us or that humans will become obsolete. Perhaps our leaders will help us smoothly guide technology into our society with only a few blips. Already, there is an organization called Partnership on AI which, according to the mission statement on its website, seeks “to shape best practices, research, and public dialogue about AI’s benefits for people and society.” Names like Amazon, OpenAI, Google, DeepMind and more than 80 others have already joined, and in November 2018 it held its second annual all-partners meeting.

While the existence of such a meeting allows for some optimism, its content isn’t good news at all. According to the event summary on the organization’s page, ethics and humanity were seemingly not part of the agenda. The forum focused on “topics ranging from the challenges of designing a global multi-stakeholder organization to designing and incentivizing equitable growth models to ensure that AI technology is built by and reflective of a diverse constituency, and that its benefits are broadly shared.” Apparently, the partners were only concerned with guaranteeing that AI enjoys healthy economic growth and produces profit for the relevant companies. Granted, the forum’s focus on equality amongst diverse demographics is noble, but there are more fundamental issues at stake. For example, are governments setting up the correct policies and regulations on AI developers? Are current goals in AI going to be good for general human satisfaction and happiness? Are we just setting ourselves up to be pawns to AI’s wishes? The partners “aim to research the ways in which we can ensure that the development of AI is used as a tool to effectively assist humans,” but why should we let industry heads choose what the goals for humanity are? Were they elected to do so? Are they some sort of elite tribe, worthy of making these choices for us?

It is time for people to stop thinking only about the positives of AI. Most currently applicable benefits of AI are obvious to the average person, because they can be generalized by imagining a really smart, really knowledgeable and really efficient person. I want to warn the deep-thinking readers of this article about the pitfalls of being blindly optimistic and unreasonable. Thinking “things will be okay, just like they always are with new technology” is an underestimation of what AI is really bringing to the table (and perhaps a misunderstanding of our present state). It is absolutely unclear what the world will look like in the next half century, and the role humans will play in it is up in the air. Simply waiting for changes to play out is an acceptance of “the inevitable” that we cannot allow ourselves to indulge in.