I don’t mind having more of the bed to myself when Dawn is gone, and I’ve stopped feeling guilty about it. I spread out like a starfish and sink into a deep sleep. But when the clocks toll downstairs I wake, and my mind gets to thinking.
I really shouldn’t care about AI ruining humanity. I mean I should, but what’s the point of that? It’s been almost a year since OpenAI released ChatGPT, and funny, now there’s all this drama about their CEO Sam Altman being fired. The timing is probably good for the media, which needed a boost on the AI storyline.
Microsoft, which owns a 49% share in the company, got one minute’s notice before the board ousted Altman last Friday. One minute! Satya Nadella, Microsoft’s CEO, had shared the stage with Altman just a week prior. Microsoft’s stock fell on the news. And now there is talk the firing may have backfired: Altman could return as CEO and fire the board instead.
While the board of directors claimed “he was not consistently candid in his communications,” the real reason for his ousting is not clear. The company’s unusual structure gives its board full governance rights over investors, including Microsoft, and it’s been speculated that the rift between board members and Altman is ideological. It’s possible they don’t see eye to eye over the risk of releasing more powerful versions of ChatGPT.
But the thing is, Microsoft has invested billions of dollars in the company and has used OpenAI’s technology (and PR) to launch new products this year, including Microsoft Copilot, which has somehow managed to make Microsoft cool again. Some have speculated that adding ChatGPT to Microsoft’s virtually unknown search engine Bing could be the beginning of the end for Google’s hold on the search market, of which Microsoft owns a measly sliver.
So you can bet that Friday was not a good day over at Microsoft with the news of OpenAI’s drama. And if Altman is reinstated, this won’t happen again, because Microsoft will insist that OpenAI get a handle on its board.
I went down the AI rabbit hole hard this past year. ChatGPT was released on my birthday last year, in late November, just as the big tech companies started a round of layoffs that continued well into the spring. It was hard not to associate the layoffs with fear of AI replacing the workforce, and with a bit of existential fog over what to do with my life.
I read books and white papers on AI, took online AI literacy courses, listened to podcasts. One of the most interesting books I read was called The Alignment Problem, which follows the history of AI from the 1950s to the present. If you worry about artificial general intelligence, the sci-fi type of AI worry, you fear the AI becoming smarter than humans and wanting to destroy us. If you follow that line of reasoning, the way to avoid that is by giving AI human values so that it won’t harm us. You align the AI’s values to ours.
Now if you really want to go down a rabbit hole, consider how you would train an AI on human values: whose values you’d pick, how you’d give it the ability to adjust, and how it could be as nuanced about its values as we humans are. I still don’t understand how large language models or neural networks work, despite how hard I’ve tried. I sure as hell don’t understand how assigning them human values would work.
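For what it’s worth, the closest I’ve come to a concrete picture is the trick reportedly used to align models like ChatGPT: don’t write the values down, learn them from human preference comparisons. Below is a toy sketch in Python. The features, data, and numbers are all made up, and real systems score text with giant neural networks rather than dot products, but the preference-fitting loop (the Bradley-Terry setup behind reward models) has this shape:

```python
# Toy "reward model": learn a scalar score for behaviors from pairwise
# human preferences. Everything here is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

dim = 4
true_values = rng.normal(size=dim)  # the hidden "human values" we pretend exist

# Humans compare pairs of options and pick a winner.
pairs = [(rng.normal(size=dim), rng.normal(size=dim)) for _ in range(500)]
labels = [float(a @ true_values > b @ true_values) for a, b in pairs]

w = np.zeros(dim)  # the model's learned "values"
lr = 0.1
for _ in range(200):
    grad = np.zeros(dim)
    for (a, b), y in zip(pairs, labels):
        p = 1.0 / (1.0 + np.exp(-(w @ (a - b))))  # P(prefer a) under the model
        grad += (y - p) * (a - b)                 # logistic-regression gradient
    w += lr * grad / len(pairs)

# The learned direction roughly recovers the hidden preferences.
cos = w @ true_values / (np.linalg.norm(w) * np.linalg.norm(true_values))
print(f"alignment between learned and hidden values: {cos:.2f}")
```

Whose comparisons go into that training set, of course, is exactly the rabbit hole above.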
If we’re fortunate, we get to decide how we’re going to feel about the world. It’s a kind of privilege to educate oneself and dither over the fate of humanity. I spread out in our clean, wide bed and drift in and out of consciousness. Today I’ll kick off a new project with Microsoft based on OpenAI’s generative AI. I am well paid and gainfully employed, and the world will not end today. I’ll make a good buck by the end of it.
Categories: Technology, writing

Microsoft just announced they hired Sam Altman to lead their advanced AI research https://www.linkedin.com/posts/satyanadella_we-remain-committed-to-our-partnership-with-activity-7132274707701059585-cKA3?utm_source=share&utm_medium=member_ios
Nick Cave on AI is exactly how I feel…
Hi Ross! Is that you I think? Darn right, Cave or Drake, you nicked it…
The chime of the city clock & send not to know for whom the bell tolls. If AI uses observational reinforcement to incorporate our “values” by inferring them from our actions, that’s obviously a death knell.
Of all the reinforcement learning modes, I hadn’t heard of the observational one! Yeah, it’s bonkers. Right after I posted this I read that Microsoft has hired OpenAI’s key talent, including Altman and cofounder Greg Brockman. Now they have direct access to the fastest jets, so to speak.
I don’t think it’s called “observational”; I don’t remember what term they’re using, but it seems like observing/surveying/assessing our behavior would have to be part of this? It all seems so impossible somehow. Twice blest is the computer that can quantify the quality of mercy, etc.
I seem to recall the term is “inverse reinforcement learning.” Whatever it’s called, it was a totally deft move for Microsoft to snag OpenAI’s leadership and core team of scientists all in one go.
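If you squint, the core idea is simple enough to fake in a few lines: watch an actor’s choices, then fit the “values” (reward weights) that would best explain them. A made-up Python toy, not anyone’s real system:

```python
# Toy inverse reinforcement learning: infer the reward weights ("values")
# that explain an observed actor's choices. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)

dim, n_obs, n_actions = 3, 400, 3
hidden_reward = np.array([2.0, -1.0, 0.5])  # the values we try to recover

# In each situation the actor picks one of three actions, each with features.
situations = rng.normal(size=(n_obs, n_actions, dim))

def pick(actions, w):
    """Choose an action softmax-proportionally to its reward."""
    logits = actions @ w
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(len(p), p=p)

choices = [pick(s, hidden_reward) for s in situations]

# Maximum-likelihood fit of reward weights to the observed choices.
w = np.zeros(dim)
for _ in range(300):
    grad = np.zeros(dim)
    for s, c in zip(situations, choices):
        logits = s @ w
        p = np.exp(logits - logits.max())
        p /= p.sum()
        grad += s[c] - p @ s  # chosen features minus expected features
    w += 0.05 * grad / n_obs

print("inferred values:", np.round(w, 2), "hidden:", hidden_reward)
```

The catch in the real thing is that many different reward functions can explain the same behavior, which is exactly the “whose values, inferred how” problem we keep circling.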
I rely on you for all my AI anxiety, Bill. Keep it up!
Thanks Kevin! Ample anxiety to go around; we don’t need to go digging for it, do we?!
ChatGPT says no, no we do not. 😂
“It is what AI might infer about human ‘values’ that scares us.”
Jiminy Cricket
.
.
.
Or perhaps DD
Ha ha good one DD! Thanks and be well!
Serious Funny
Would AI “think” logically about which values predict the best outcomes (and what outcomes would it value: kindness, wealth, intellect…), or would it assume the most prevalent values are the best?
From what little I know, the risk/fear (as I understood it from Geoffrey Hinton) is that they develop their own motivations and see us as a barrier to fulfilling their goals. That assumes the whole sentience thing, and I guess that level of risk tolerance is part of the ideological debate between OpenAI’s board members and Sam Altman. OpenAI was founded to create safe AGI, and its board had the power to serve as a check and balance on the for-profit wing of the company. Seems the for-profit wing just won, so make of that what you will.
The idea of deliberately giving AI “human values” seems like a slippery slope. It’s already been somewhat corrupted. The best we can hope for is that they use sources that are as unbiased and as verifiably fact-based as possible.
But as you’ve studied AI more than I have, do you think it’s possible to teach an AI model ethics?
Thanks Dave. I like what Yann LeCun said recently about AI safety: that AIs won’t try to hurt humans because they aren’t social “creatures” and therefore won’t want to dominate. He referenced the fact that orangutans, unlike chimpanzees, don’t try to dominate because they are not social the way chimps (or humans) are. Interesting theory, and I hope he’s right, ha ha! It comes down to whether or not they develop their own goals. LeCun holds they won’t, because we’ll control their goals. If that’s true, perhaps they can be taught not to do anything that would cause harm to humans or others. Hope it works! Still, I worry we’re playing with fire! But maybe we’ll get beyond that sci-fi scenario one day, and it truly will be utopic.
Utopia is a pipe dream, even with an ideal AI. LeCun seems to forget that there will likely be many AI models. Many, probably even most, will be created ethically. But there are always bad actors looking to manipulate people for their own purposes. Those are the ones to watch out for. Let the buyer beware!
Yeah, the utopia from WALL-E is not my cup of tea. Good film, albeit a children’s film on the surface, if you haven’t seen it…