Artificial intelligence is rapidly becoming embedded in American society, and many experts say it will be as life-changing as the internet, revolutionizing our economy and everyday lives. But there are huge concerns, and our guest today will address some of them. Stay with us.

Coming to us today from Highland Park, IL, is Michael Pickard, a prolific author who’s had a successful career in information technology, and we’re going to lean on that expertise for today’s discussion.

Michael is inspired by ideas. What is, what isn’t, and what could be. Back in 1993, he started writing fiction when his daughter Samantha asked him to write daily letters to her at overnight camp. So, he mailed her chapters of a story she could relate to: an alien who came to Earth and attended overnight camp. Specifically, her camp.

Those letters continued every summer at her request. After five years, he’d accumulated enough material for his first novel. And then, he couldn’t stop writing. You can visit his catalog at http://www.gerfnit.com. You’ll find nine novels (paperback and ebook), a collection of short stories, and a children’s book. We’ll get him to fill us in on all of that later, but first our focus is AI, which Michael calls “Machine Learning Systems.”

A new Stanford University report makes this observation: “As the technical barrier to entry for creating and deploying generative A.I. systems has lowered dramatically, the ethical issues around A.I. have become more apparent ….” And, Goldman Sachs noted last month, if generative A.I. lives up to its potential, up to 300 million jobs could be at risk in the U.S. and Europe, with legal and administrative professions the most exposed.

So, clearly, there are many concerns – as there should be. Michael, welcome to the Lean to the Left podcast.

Q. Before we get to your books, let’s dive right into your concerns about AI. But first, what expertise do you have in this field?

Q. Why do you refuse to use the term “artificial intelligence?”

Q. What are your concerns about how it might be used, and what dangers do you foresee?

Q. In a note to me, you said “People who build language models for these systems ignore aspects of ethics. Garbage in, hatred out.” You said there is a “fatal flaw in the current techniques underlying machine learning and no one in the industry is stepping up to solve it…or even talk about it.” Please explain.

Q. What should be done to guard against these types of abuses?

Q. Many companies already are racing to use “machine learning” and the Stanford survey said this:

“At its current developmental speed, research is moving on from generative A.I. to creating artificial general intelligence, according to 57% of researchers surveyed by Stanford. Artificial general intelligence, or AGI, is an A.I. system that can accurately mimic or even outperform the capabilities of a human brain.”

That is no small statement, which raises this question:

Should there be government regulations developed to protect against abusive, perhaps even criminal, behavior with this technology?

Q. Please tell us about your books, maybe focusing on your latest since you have so many!

Q. Are you planning a book focused on the use of “Machine Learning”?

Q. How can people find your books and how can they reach out to you?

This show is part of the Spreaker Prime Network. If you are interested in advertising on this podcast, contact us at https://www.spreaker.com/show/4719048/advertisement

Show Notes

Don’t forget to follow Lean to the Left at podcast.leantotheleft.net, and you can reach me at bob@leantotheleft.net. You can also follow us on social media…Facebook at The Lean to the Left Podcast. Twitter at LeantotheLeft1. YouTube at Lean to the Left, Instagram at BobGatty_leantotheleft, and TikTok at Lean to the Left.

If you would take a minute to give us a review, that would be great. There are lots of podcast links on our webpage, podcast.leantotheleft.net, where you’ll also find our upcoming interview schedule and links to all of our podcasts.

I hope you’ll come back on a regular basis and check out our interviews with guests on topics that I hope you find interesting, entertaining, and enlightening. 

Our interview shows stream weekly on Mondays, and depending on what’s going on, also on Wednesdays, and most are produced as videos available on the Lean to the Left YouTube channel.

Also, let your friends know about this podcast and take a minute to subscribe yourself. Just go to podcast.leantotheleft.net to subscribe, check out the upcoming interview schedule, and listen to all of our episodes. 

Remember, our goal is to be informative and entertaining as we comment on the latest developments in the news…you guessed it…with just a little lean to the left.

Show Transcript

Concerns About Artificial Intelligence

[00:00:00]

[00:00:00] Bob Gatty: Artificial intelligence is rapidly becoming embedded in American society, and many experts say it will be as life-changing as the internet, revolutionizing our economy and even our everyday lives. But there are huge concerns, and our guest today will address some of them. So stay with us. 

[00:00:21] Coming to us today from Highland Park, Illinois, is Michael Pickard, a prolific author who's had a successful career in information technology, and we're going to lean on that expertise for today's discussion.

[00:00:35] Michael's inspired by ideas: what is, what isn't, what could be. Back in 1993, he started writing fiction when his daughter Samantha asked him to write daily letters to her at overnight camp. So he mailed her chapters of a story she could relate to: an alien who came to Earth and attended overnight camp.

[00:00:59] Guess whose camp? Her camp. Those letters continued every summer at her request. After five years, in fact, he'd accumulated enough material for his first novel, and then he couldn't stop writing. You can visit his catalog, and it's a catalog 'cause there's a lot of stuff there, at www.gerfnit.com (G-E-R-F, like Frank, N-I-T dot com). You'll find nine novels, paperback and ebook, a collection of short stories, and even a children's book.

[00:01:39] We will get him to fill us in on all of that later. But first we're gonna talk about AI, which Michael calls machine learning systems; says he hates the term AI. A new Stanford University report makes this observation: as the technical barrier to entry for creating and deploying generative AI systems has lowered dramatically, the ethical issues around AI have become more apparent. The ethical issues. And, Goldman Sachs noted last month, if generative AI lives up to its potential, up to 300 million jobs could be at risk in the US and Europe, with legal and administrative professions the most exposed.

[00:02:32] So clearly there are many concerns, as there should be. Michael, welcome to the Lean to the Left Podcast. 

[00:02:41] Michael Pickard: Good to be with you. 

[00:02:43] Bob Gatty: Hey, man. Before we get to your books, let's dive right into this, into your concerns about AI. First, tell us a little bit about your background and your expertise in technology.

[00:02:56] Michael Pickard: I have been doing software professionally since 1971, when I graduated from Northwestern University with a bachelor's in mathematics and an unofficial minor in computer science, because back in the stone age Northwestern didn't have a computer science degree. But I took every computer science course there was, and I even invented some that the TAs I worked alongside at the computer center would teach.

[00:03:25] So I got a good broad foundation for what goes on inside a computer in terms of software and what the limits were. 

[00:03:36] Bob Gatty: Excellent. Now, why is it that you refer, let's just start with that: why do you refuse to use the term artificial intelligence? 

[00:03:45] Michael Pickard: So the term has been misused so badly that if you ask someone what it is, for a definitive definition, no one can give you one. And I really don't like amorphous phrases that don't carry some meaning. I think the English language should be more precise so that we can communicate. And if I use a phrase that can mean anything,

[00:04:15] I'm not really communicating with you.

[00:04:17] Are these systems artificial? Certainly, because it's software running on a piece of hardware, a computer, typically. But I balk at the second word of the phrase, intelligence. 'Cause I don't believe that these systems are intelligent at all. 

[00:04:34] Bob Gatty: All right. So what are your concerns about how it might be used and what dangers do you see?

[00:04:43] Michael Pickard: I tripped over this topic when I heard a news report that the facial recognition systems that law enforcement was using were not recognizing African-American faces. They were getting false hits on African-American faces, but Caucasian faces were being processed very well, and I thought it was an interesting news report.

[00:05:18] So I dug into it, and I learned that the catalog of faces that trained the system did not include sufficient ethnicities so that those faces might be recognized. And it was clear to me that it may have been unintentional, but it was still inserting a bias, an unfortunate and unnecessary bias, in a law enforcement tool.

[00:05:49] And so I appreciated the identification of this flaw so much, I included it in one of my novels, a detective story where they can't recognize my protagonist's partner, who is African American. I've always had, even before I went to college, an interest in technology, and I follow technology as a hobby.

[00:06:12] So keeping up with how machines simulate intelligence has always been something on my radar. 

[00:06:22] Bob Gatty: I see. Okay. Now, in a note to me, you said people who build language models for these systems ignore aspects of ethics. Garbage in, hatred out. Now, you said there was a fatal flaw in the current techniques underlying machine learning, and no one in the industry is stepping up to solve it or even talk about it.

[00:06:46] Can you explain what you're referring to there? 

[00:06:49] Michael Pickard: I'm not gonna make this a tutorial on GPT systems or how they work, only to say that there are two phases. There is a phase where information is fed into a computer system and the system is taught based on that information. And so that's how it makes relationships, either between pictures or words or music.

[00:07:21] That's how the system will later operate to respond to a human. And the second part is the part that's getting all the press, which is being able to type in a prompt or a question and have the computer software, the machine learning model, give a response that the human processes as intelligent.

[00:07:49] The problem, in my view, is that the information being loaded into these systems to train them is just raw data. So it's news articles and websites and who knows what else, and there doesn't seem to be any kind of quality step in the process of training the system in the first place.

[00:08:17] So you can include things in what is training this software that can eventually generate nasty stuff on the outbound side. That's why some of these chat systems that were brought online and were fed from Facebook as they were learning turned immediately hostile, nasty, and biased: because they were fed with garbage.

[00:08:47] So in my note to you, I said that no one is paying attention. That was a little too specific. There are people who are dancing around the periphery of dealing with this. So it's not completely being ignored, but I think some of their efforts are misguided.

[00:09:09] And I can talk about that later. 

[00:09:12] Bob Gatty: What do you think needs to be done? 

[00:09:13] Michael Pickard: Computers are really good for repetitive tasks. And I think one of the things that should be done is to appropriately test the information that is being loaded, to make sure that there isn't accidental bias and other bad stuff being loaded to train the machine.

[00:09:45] But I think that in the testing of these systems, they ought to push on the testing to see if they can uncover things that they may have missed. And I'm not sure that anybody is testing these things. They load it up, the software learns, and then they turn it loose. And all of us users of that software are the testers.

[00:10:14] I think that's wrong. 

[00:10:16] Bob Gatty: Yeah. Have you tried ChatGPT yet yourself? 

[00:10:23] Michael Pickard: I have, nominally. Not for my writing. I refuse to do that.

[00:10:27] Dabbled with it, right? Yeah. But I've learned I don't have to stick my hand in a fire to know that I'm gonna burn myself. Other people have gotten burned, and I'll learn from them. 

[00:10:40] Bob Gatty: Okay. Yeah. Many companies are already racing to use this, quote, machine learning, and the Stanford survey said this:

[00:10:52] At its current developmental speed, research is moving on from generative AI to creating artificial general intelligence, according to 57% of researchers surveyed by Stanford. Artificial general intelligence, or AGI, is an AI system that can accurately mimic or even outperform the capabilities of a human brain.

[00:11:21] Now, that is no small statement, and it raises this question: should there be government regulations developed to protect against abusive, perhaps even criminal, behavior with this technology? 

[00:11:36] Michael Pickard: I'm not sure that inviting the government in is gonna yield the result you want. Okay. Because of the way government works, I'm a bit of a skeptic.

[00:11:49] When I used to work at a corporate office and I went to one of the branch offices and said, I'm here from corporate to help you, people would laugh out loud and say, why don't you substitute the word government? I think that there is a false promise here. I'm not gonna bore your listeners with how GPT systems work on the inside. I will mention a buzz phrase, and they can look it up:

[00:12:15] Neural networks. So they can spend an afternoon reading about neural networks. Okay. But that is what is created when you take all of this raw material and go through a learning process. The idea of a neural network is supposed to mimic or simulate, remember that word, simulate, how a human brain works.

[00:12:40] And so the thought for this particular technology, and the hope in that quote, is that if we have sufficiently robust neural networks that we train from a huge variety of sources, and now they're into the tens or hundreds of billions of source documents that are being loaded to train systems, these neural networks will process this information and learn something.

[00:13:19] And then be able to simulate intelligence to human beings. The flaw in that promise is that the intelligence doesn't come from the software. If you do a ChatGPT session and it generates some prose, and you look at it and you say, oh, that's pretty good writing. That's a pretty good answer.

[00:13:45] It's even helpful. The intelligence in dealing with that statement is coming from the human user who's viewing it as something useful, but the machine that generated it had no intelligence, not a bit. Because all of these systems, using statistics and huge number crunching, associate all those words and phrases.

[00:14:13] And I wanna quote from an article I saw this morning from the founder of Mathematica. It says that what GPT always fundamentally tries to do is produce a reasonable continuation of whatever text it's got so far. So it starts out with some words, and then it figures out what is a likely next word, and then a likely next word.

[00:14:43] And it strings them together and generates that as an answer. That's a mathematical, statistical process that a computer can do real well. It is not intelligence. 
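The "likely next word" process Michael describes can be sketched as a toy bigram model. This is purely illustrative: real GPT systems use neural networks trained on billions of documents, not a simple lookup table of word counts, and the function and variable names here are invented for the example.

```python
import random
from collections import defaultdict

def train(text):
    """Record which words follow which in the training text."""
    words = text.split()
    follows = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def continue_text(follows, start, length=5, seed=0):
    """Generate a 'reasonable continuation' one likely next word at a time."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # no observed continuation; stop
        # Pick a next word with probability proportional to how often
        # it followed the previous word in the training text.
        out.append(random.choice(candidates))
    return " ".join(out)

model = train("the cat sat on the mat and the cat ran")
print(continue_text(model, "the"))
```

Every continuation the sketch produces is statistically plausible given the training text, yet the program has no idea what a cat or a mat is, which is exactly the point Michael is making.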

[00:14:59] Bob Gatty: I see. Okay. Now

[00:15:02] Are there any other points about this topic that you'd like to make before we switch over and focus a little bit on your books and what you've been doing? 

[00:15:14] Michael Pickard: Sure. I came across an article, and I found it using Bing Search, which has some GPT in it. Okay. Harvard University has an initiative called Embedded Ethics.

[00:15:31] And they're including philosophical concepts in their computer science courses so that students don't only ask, can I build it, but rather, should I build it? And if so, how? So that's putting ethics into the education of the next generation of software developers. And I think that's a good thing.

[00:15:56] I belong to something called Lunch Chat: 45-minute sessions that are arranged based on your interests. And so I've met people all over the world through Lunch Chat. And because I have machine learning as one of my topics, I get to meet a lot of people who are doing stuff with machine learning.

[00:16:17] And I ask them the following obnoxious question: how are you embedding ethics in the machine learning systems that you are deploying today? And when I ask that question, there is a long silence, and they talk around the edges of the question I asked. But none of them to date, not even one, has made any suggestions about how they could include ethical considerations or moral considerations in the systems that they are currently deploying.

[00:16:52] And that's what they're doing. They're rolling this stuff out. So the people who are rolling it out don't know, and are caught flatfooted about ethics in the systems they're rolling out. The more I got that answer, the worse I felt about the impact of these systems on the human population. 

[00:17:17] Bob Gatty: I don't know if driverless cars and driverless tractor-trailer trucks are enabled through some form of AI. Are they?

[00:17:30] Michael Pickard: Yes. 

[00:17:32] Bob Gatty: That scares the living you-know-what out of me, man. Seeing an 18-wheeler coming down the road with nobody behind the wheel. How about you? 

[00:17:39] Michael Pickard: I guess as someone who wants the planet to be viable for my daughter and grandchildren, I should be looking at electric cars.

[00:17:50] But I don't want to turn over any aspect of driving to a machine algorithm. I'm just unwilling to do it. I've been around software too long. I've written major systems. I thought they were bug free. We had tested them extensively. We deployed them and then we started to get bug reports. So there was never anything that you could call perfect software.

[00:18:17] There will be flaws, always. You can't beat all the bugs outta software. And in this process, you're so far from any algorithm that you could debug. You have this engine that takes all this input and creates a neural net, and then the neural net responds. If there's a bad response,

[00:18:42] I read that one of the GPT instances said that one plus two did not equal three. Okay? Okay. Yeah. So if you bought a calculator and you tested it out, and you said one plus two and you did not get the answer three, you would take the calculator back to the manufacturer and say, this is flawed.

[00:19:09] It's not giving me the right answer. In machine learning systems, if you get some sort of answer that is inappropriate or just plain wrong, and there have been some of those too, how do you debug those? There is no entrée to the algorithms that generated the answer. When I talked to these machine learning folks, I asked my second obnoxious question, which is: when you get an answer, can you ask the system,

[00:19:46] how did you come up with that answer? And their response is, all it would do is just generate a bunch of tables of statistics. It can't tell you how it came up with the answer. It can just show you the inner workings of the stats and the neural network. So debugging these things is problematic.

[00:20:12] Bob Gatty: So you're saying that electric cars have some form of AI built within them, and that worries you? 

[00:20:21] Michael Pickard: Absolutely. 

[00:20:23] Bob Gatty: What I don't understand is what it is that AI is related to with an electric car. If I don't know it, probably a lot of people listening don't know it either. So what is it?

[00:20:39] Michael Pickard: I think it's simply that the software in a self-driving car needs to look at the environment and make decisions. Yeah. Based on what it sees. So yesterday morning I was gonna make a left-hand turn. Yeah. The light was turning yellow. I was out in the middle of the intersection, and coming toward me was an SUV.

[00:21:05] Okay. I had to make the decision whether I thought the SUV was gonna stop or go through the intersection. And if he was gonna stop, I could go ahead and make my left turn; if he was gonna barrel through, making the left turn would've caused my car to be broadsided, right? So given how fast he was moving and thinking about the risk to my life, I decided to wait. And he barreled through the intersection on the yellow; it turned red as he was passing through. And on a red light, in the middle of the intersection, I completed my left-hand turn. I can't tell you if the subtle nature of that situation would've been appropriately diagnosed by a piece of software running under the hood.

[00:21:59] Bob Gatty: Yeah. I understand that's driverless cars, but I was just referring to regular electric vehicles. Is there a concern there too? 

[00:22:09] Michael Pickard: Much less because the human is in control. 

[00:22:16] Bob Gatty: Yeah. Really, the only time I see that coming into effect is if you're trying to use one of the features that will do a parallel park for you or something like that.

[00:22:31] Michael Pickard: Yeah. I have a relatively modern car that I call a computer on wheels. Yeah. Because everything is through a touchscreen, and it warns me when it thinks I'm doing something wrong, even if I'm not doing something wrong, right? And on occasion, with me behind the wheel, so I'm in charge, it will throw up a big red "not" symbol and apply the brakes, and there's nothing in front of me.

[00:23:01] Really. It's reacting to prevent me from crashing, but crashing into nothing. So I know the software's buggy; I experienced the results of it. Yeah. And that's with me in control. So what kind of car is that? It is a modern Volkswagen Golf. 

[00:23:22] Bob Gatty: A modern Volkswagen Golf.

[00:23:24] Okay. All righty then. Let's see. I wanna hear about your books. You've got, what, eight or nine? A whole bunch?

[00:23:32] Michael Pickard: The 10th one came out last month. It was my first murder mystery novel. Yeah, you were right. Up to that point, I was doing sci-fi novels.

[00:23:44] I'm really comfortable in the sci-fi genre. Okay. But I wanted to test my skills at a murder mystery. All right. What's the name of it? It is called Creative Deductions Home Run. 

[00:24:01] Bob Gatty: Okay. I saw the cover for that. Talk about it. 

[00:24:04] Michael Pickard: Okay. Shameless plug here, man. I appreciate it. I was taking a walk with my wife.

[00:24:09] We lived not too far from Lake Michigan, and I said, you read a lot of mysteries. What's an aspect of mystery stories you enjoy? And she said, when there are two murders and they seem to be separate, but the detective figures out that they're related. And I said, that's a pretty cool idea.

[00:24:30] Yeah. And so I set that as my goal. So I invented a Chicago detective, Nick Chasm. He grew up in a small town near Rockford but now works for the Chicago Police Department. And he's a very flawed character, intentionally, because flawed characters are interesting. And in the middle of a drug arrest, his mother, who's 82 years old, calls him and says, my 80-year-old male companion has been murdered.

[00:25:04] And you have to come up here and solve it. And he tries to get out of it 'cause he doesn't really like going home. There are memories of a bad relationship with his father that keep him from going home to see his mother, although he calls her every night. And reluctantly, he goes up to Eden Gorge, his hometown, and starts to investigate that murder,

[00:25:31] or that circumstance, and learns that the same night there was a suicide in the place where he used to work. And so he starts investigating. 

[00:25:44] Bob Gatty: Okay, that's all you're gonna tell us, huh? 

[00:25:49] Michael Pickard: Yeah, I don't wanna spoil it too bad. 

[00:25:52] Bob Gatty: That's all right. That's all right. Sounds like a great book, man. I'll tell you what, I love these kinds of books. And you sent me another cover.

[00:26:01] You might as well tell me about that. What is that? 

[00:26:04] Michael Pickard: So a year earlier, I released my ninth sci-fi novel, called Forward and Back. The protagonist is a physicist working at a laboratory on the outskirts of Chicago. He's named after my high school physics teacher, who passed away a couple years ago.

[00:26:26] And he's working on developing heavy protons to kill cancer growths. He's been working at it for 10 years, and his funding is running out. So he's got one last experiment to do, and it's scheduled, unfortunately, on the morning after the night that his wife goes into labor with their first child.

[00:26:54] And so now he's faced with a dilemma. Does he go do the experiment, or does he stay with his wife? She lets him go to the experiment because the doctor says it'll be hours of labor. And he gets there just a moment too late; the experiment goes awry and sends him eight years into the future.

[00:27:17] So at that point, his wife has declared him dead, or the courts have declared him dead. His son was born and is eight years old, but he's never met him. He doesn't have a job. He doesn't have any money. And so he needs to figure out what he's gonna do. Is he gonna stay eight years in the future and try to reconcile with his wife?

[00:27:37] Is he gonna try to go back and fix the problem that caused him to leap eight years forward? And there are some subversive elements trying to get him to do one thing or the other. That's Forward and Back. 

[00:27:52] Bob Gatty: All right. That sounds like a good one too. Now, since you have spent all this time writing sci-fi, and machine learning is a bit like sci-fi,

[00:28:03] you should go do a book about that. 

[00:28:07] Michael Pickard: There is a novel I wrote that had a, oh, I bite my tongue, an artificial intelligence component. 

[00:28:18] Bob Gatty: He didn't wanna say that. 

[00:28:20] Michael Pickard: I didn't wanna say that. The book is called Off the Books. It's set in China about 30 years from now. And it's based on an NPR story I heard. They were talking about hundreds of thousands, maybe millions, of Chinese workers building iPhones and iPads and computers.

[00:28:46] Okay? And I thought, what if? A lot of my novels come from what-ifs. What if robots took over all their jobs? Okay. What is China gonna do with a million unemployed piece workers? Those people could revolt; they gotta give 'em something to do, right? They decide to turn them into software writers, and they sponsor the formation of companies to write software.

[00:29:20] Okay? So the protagonist in the story is a twenty-something Chinese female, because they say you should write what you know. And it is her life journey from putting in screw number 13 in an iPhone, to becoming a software person, to being thrown into a project that is way over her skill level, and how she grows into the job.

[00:29:46] Okay. And how she becomes a self-directed person. In the middle of that, there is an AI component. 

[00:29:54] Bob Gatty: What's the name of that book? 

[00:29:56] Michael Pickard: That is called Off the Books.

[00:29:58] Bob Gatty: Off the Books. You already said that. All right, so you're not going to start another book focused on AI and some of these ethical issues that you're concerned about?

[00:30:12] Michael Pickard: In fact, I do have one I'm dabbling with. 

[00:30:15] Bob Gatty: Ah, see, I told you, I knew the topic was there and you're not gonna be able not to deal with it. I know that. 

[00:30:23] Michael Pickard: Yeah. When you're dealing with the genre of science fiction, you can't really stay away from it. 

[00:30:31] Bob Gatty: Oh, you got an imagination too.

[00:30:33] Obviously you've got a hell of an imagination. All right, so people find your books where? Amazon. Where else? 

[00:30:40] Michael Pickard: Yeah. They can take a look at a contest I'm running for the current book at gerfnit.blogspot.com. 

[00:30:56] Bob Gatty: How do you spell that? 

[00:30:58] Michael Pickard: G-E-R-F-N-I-T dot blogspot, B-L-O-G-S-P-O-T, dot com.

[00:31:08] I don't update it frequently, but when I have a contest, that's where I put the contest rules. And if any of your listeners wanna write to me, 'cause I love getting email. Okay. They can write to me at author@gerfnit.com. 

[00:31:25] Bob Gatty: Very good. All right, Michael, it's been great talking to you. Do you have anything else you'd like to share with us before we close it up?

[00:31:34] Michael Pickard: No. I appreciated the opportunity to have a forum to raise at least some of the issues before things get out of hand. Yeah. 

[00:31:45] Bob Gatty: Okay. We'll fire it out there, and hopefully some people will pay attention. So thanks very much for being with us, Michael. Appreciate it. 

[00:31:56] Michael Pickard: My pleasure.

[00:31:58]
