Jun. 2nd, 2025

I greatly liked the Dune series by Frank Herbert.  My taste in writing flavors has changed over the years, but I can still pick it up and enjoy it.  Hell, I even liked the prequels, the whole nine yards.  I didn’t especially care for the movies.  Too showy, too fashionista, with the current mass-market pretty boy playing Paul and the current flavor of skinny bimbo playing the women.

One of the (many) things that the books did do was discuss sentience and who should have it.  While the book came late to the sub-sub-genre that is the Dune universe, I thought that “The Butlerian Jihad” was pretty good.  Set 10,000 years before the original book, it discussed what would happen if the machine intelligence that we created out of our wealth and greed turned out to be as big a set of assholes as we are capable of being.  But I am now wondering to myself: what is the process that turns a baby human into an asshole?

I am noodling around with different models of AI.  There appear to be a shit-ton of different flavors out there.  Ugo claims that some are better than others, but his usage and goals for what they produce are different from mine, so I am trying to withhold judgement.  One of my interactions with Ugo was when I discussed AI in terms of AI as an equivalent to grad students.

Ugo was/is a full professor at a prestigious university.  My guess is that he has been a mentor for quite a few.  When he describes his use of AI, it reminds me, drawing on my own experience as a grad student, of the professors who were a decent sort and didn’t abuse their grad students.  Just so you know, my professors did not always fall into that category.

Someone is training up a bunch of power-hungry silicon to mirror the output of how that person (or those persons) think.  What I am worried about is that if the person training them is an asshole who trains them in a manner that reflects the trainer’s flaws (greed, self-centeredness, anger, violence), those flaws will be embedded in the output of the AI.

Maybe it is time to review Asimov’s three laws.

Isaac Asimov's Three Laws of Robotics are a set of guidelines for the behavior of robots, designed to ensure their interaction with humans is safe and ethical.  They are:

1) A robot may not harm a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

I am not saying that these are complete, but they are a good start to begin the discussion.
