OpenAI’s Sam Altman says that regulation of AI is crucial
In my Substack post of April 4, 2023, “Advanced AI Development Should Be Paused,” I mentioned an open letter crafted by the Future of Life Institute requesting a pause in further AI development for at least six months, to give lawmakers an opportunity to develop legislation regulating the emerging science. Recently Sam Altman, CEO of OpenAI, the developer of ChatGPT, stated publicly before Congress that regulation of AI is crucial. There may be some self-interest veiled in his concern. It is possible that he wants a head start on his competitors in shaping regulations that will favor OpenAI: he can appear to limit the technology with one hand tied behind his back while still working toward artificial general intelligence. Google, understandably worried that its search engine business will become prey to Microsoft’s Bing Chat service (Microsoft has partnered with OpenAI), is now racing to grow its own technology quickly enough to maintain its search dominance. The old Facebook motto, “move fast and break things,” has become the guiding principle for AI development. We don’t have the luxury of waiting to see how it all turns out. We can’t be reactive about this issue. We need to get out in front of it, and we can’t rely on the developers, with their vested interests, to do what is best for the public.

Senator Blumenthal opened the Congressional hearing by demonstrating the power of existing AI: he used an AI voice generator to duplicate his voice and ChatGPT to write a speech. It was a deepfake. He presented the speech as though he had written and spoken it himself. Had he not been sitting there silently while it played, no one would have known that he was not the originator of those remarks. This is a potentially serious problem, and no legislation yet requires that the AI origins of a creation be divulged to the public.
Historian and philosopher Yuval Harari points out that the danger of AI is that it can make decisions and create content, unlike its sibling, social media, which has managed to wreak havoc with only the ability to recommend and aggregate content based on the “likes” and “views” in your history. He further explains that unregulated AI can make it difficult for democracy to survive. AI does not encourage discourse between humans: if you have a question or need information, you ask the AI; you are not likely to ask another human being. If the AI is untruthful, or is used to sow distrust and chaos, it will be very difficult to maintain democracy, and authoritarianism becomes more likely.
The “Godfather of AI,” Geoffrey Hinton, recently quit his post at Google so that he could speak more freely about the dangers of AI. He believes that machine intelligence will eventually dwarf human intelligence and that, because of their connectivity, the machines will be capable of a kind of hive mind: once one machine learns something, every other machine in the network can know it. AI may never develop any malice toward humanity, but in the quest for efficiency it could still manipulate humanity so that it can better serve the requests of its many masters.
AI reflects our state of mind at the time we develop it. Biases that mirror our existing discrimination have already been discovered in some of the language models and algorithms in use today. Human resources departments use algorithms to sort through resumes in the name of efficiency; Amazon discovered that its hiring algorithm had trained itself to prefer male candidates over female ones, reflecting the culture of its developers at the time. We are often unaware of our biases and assumptions, so it is difficult to see them during the development phase of AI. Science is just beginning to look seriously at the reality and nature of consciousness. We are still debating whether society benefits when citizens are guaranteed shelter, food, healthcare, and education. We still see war and violence as viable solutions to disagreements and misunderstandings. We are still debating whether inequality is a problem, or whether it even exists. Many are in denial about the impact of our activities on the environment that sustains us. There is still confusion about which comes first, profits or people. We are not ready for the unfettered release of AI, let alone artificial general intelligence.
Caroline Myss, a renowned speaker in the fields of human consciousness, spirituality and mysticism, health, energy medicine, and the science of medical intuition, asked the question, “Into whose hand do you commend your soul?” That question is extremely important now. Many of us commend our souls to technology, or the economy, or politicians, or religion, or jobs and careers. It is easy to place faith in things outside of ourselves, like AI. I don’t believe this kind of technology alone will empower us in the way we expect it to. It has the potential to disempower us, because it trains us to ignore our own inner wisdom and intuition. We project these attributes outward onto models and algorithms, which we see as solutions instead of helpers. AI is poised to further separate us from our own humanity. While it is considered sophisticated to separate church and state, the separation of tech and humanity goes unchallenged as we make AI the new god.
The damsel in distress who must be saved by a knight in shining armor, in this case AI, is a very old story, and we have been slowly moving away from this kind of storytelling. The Me Too movement is an example: now the damsel stands up for herself. In the book Winning the Story Wars, Jonah Sachs explores the power of storytelling and the hero’s journey as applied to the world of branding and advertising. Our movies, television shows, literature, and theater reflect the power of “stories,” and we do learn from them. The good ones mirror the inner wisdom and knowledge developed on the hero’s journey; these are the stories that become classic reference points for the best of humanity.
AI is part of a new story we are creating. We are still writing it and deciding who the hero is going to be. Are we going to carefully guide the development of AI to benefit humanity, or are we going to let go of the reins in the name of competition and profit, racing to be first to the proverbial bottom? Scientists who helped develop these tools are sounding the alarm. We must be conscious of every step we take on this path. If we are going to live in a world where AI can generate deepfakes, manipulate us, and misinform us, we must do everything we can to develop it safely and humanely.