Artificial intelligence is rapidly becoming embedded in American society, and many experts say it will be as life-changing as the internet, revolutionizing our economy and everyday lives. But there are reasons to be concerned about AI, and our podcast guest addresses some of them.

Coming to us from Highland Park, IL, is Michael Pickard, a prolific author who has had a successful career in information technology, and we lean on that expertise for this discussion. Michael is inspired by ideas: what is, what isn't, and what could be. Back in 1993, he started writing fiction when his daughter Samantha asked him to write daily letters to her at overnight camp. So he mailed her chapters of a story she could relate to: an alien who came to Earth and attended overnight camp. Specifically, her camp. Those letters continued every summer at her request. After five years, he had accumulated enough material for his first novel. And then he couldn't stop writing. You can visit his catalog at http://www.gerfnit.com, where you'll find nine novels (paperback and ebook), a collection of short stories, and a children's book.

He fills us in on that later in the interview, but our initial focus is AI, which Michael calls "Machine Learning Systems." A new Stanford University report makes this observation: "As the technical barrier to entry for creating and deploying generative A.I. systems has lowered dramatically, the ethical issues around A.I. have become more apparent …." And, as Goldman Sachs noted last month, if generative A.I. lives up to its potential, up to 300 million jobs could be at risk in the U.S. and Europe, with legal and administrative professions the most exposed. So, clearly, there are many concerns, as there should be.

Here are the questions we discussed with Michael:

Q. Before we get to your books, let's dive right into your concerns about AI. But first, what expertise do you have in this field?

Q. Why do you refuse to use the term "artificial intelligence"?

Q.
What are your concerns about how it might be used, and what dangers do you foresee?

Q. In a note to me, you said, "People who build language models for these systems ignore aspects of ethics. Garbage in, hatred out." You also said there is a "fatal flaw in the current techniques underlying machine learning and no one in the industry is stepping up to solve it … or even talk about it." Please explain.

Q. What should be done to guard against these types of abuses?

Q. Many companies are already racing to use "machine learning," and the Stanford survey said this: "At its current developmental speed, research is moving on from generative A.I. to creating artificial general intelligence, according to 57% of researchers surveyed by Stanford. Artificial general intelligence, or AGI, is an A.I. system that can accurately mimic or even outperform the capabilities of a human brain." That is no small statement, which raises this question: Should government regulations be developed to protect against abusive, perhaps even criminal, behavior with this technology?

Q. Please tell us about your books, perhaps focusing on your latest, since you have so many!

Q. Are you planning a book focused on the use of "Machine Learning"?

Q. How can people find your books, and how can they reach out to you?
