A Dune Digression
I liked Frank Herbert's Dune series greatly. My taste in writing flavors has changed over the years, but I can still pick it up and enjoy it. Hell, I even liked the prequels, the whole nine yards. I didn't especially care for the movies. Too showy, too fashionista, with the current mass-market pretty boy playing Paul and the current flavor of skinny bimbo playing the women.
One of the (many) things that the books did was discuss sentience and who should have it. While the book came late to the sub-sub-genre that is the Dune universe, I thought that “The Butlerian Jihad” was pretty good. Set 10,000 years before the original book, it discussed what would happen if the machine intelligence that we created out of our wealth and greed turned out to be as big a set of assholes as we are capable of being. But now I am wondering to myself: what is the process that turns a baby human into an asshole?
I am noodling around with different models of AI. There appear to be a shit-ton of different flavors out there. Ugo claims that some are better than others, but his usage and goals for what they produce are different from mine, so I am trying to withhold judgement. In one of my interactions with Ugo, I discussed AI as an equivalent to grad students.
Ugo was/is a full professor at a prestigious university. My guess is that he has been a mentor for quite a few. When he describes his use of AI, my own experience as a grad student reminds me of the professors who were a decent sort and didn't abuse their grad students. Just so you know, my professors did not always fall into that category.
Someone is training up a bunch of power-hungry silicon to mirror the output of how that person (or those persons) thinks. What I am worried about is this: if the person doing the training is an asshole and trains them in a manner that reflects the trainer's flaws (greed, self-centeredness, anger, violence), those flaws will be embedded in the output of the AI.
Maybe it is time to review Asimov's Three Laws.
Isaac Asimov's Three Laws of Robotics are a set of guidelines for the behavior of robots, designed to ensure their interaction with humans is safe and ethical. They are: 1) A robot may not harm a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I am not saying that these are complete, but they are a good start to begin the discussion.
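To make the discussion concrete: the Laws are a strict priority ordering, where each Law yields to the ones above it. Here is a toy sketch of that ordering as a rule check. The function name, the flag names, and the whole structure are my own invention for illustration, not anything from Asimov or this post.

```python
# Toy sketch (invented for illustration): Asimov's Three Laws treated as
# strictly prioritized constraints on a proposed action. The flag names
# ("harms_human", etc.) are assumptions, not a real API.

def permitted(action: dict) -> bool:
    """Return True if a proposed action passes the Laws, checked in priority order."""
    # First Law: no harm to humans, by action or by inaction.
    if action.get("harms_human") or action.get("allows_human_harm"):
        return False
    # Second Law: obey human orders, unless obeying would violate the First Law.
    if action.get("disobeys_order") and not action.get("order_would_harm_human"):
        return False
    # Third Law: self-preservation is subordinate to the first two Laws,
    # so risk to the robot itself never vetoes an otherwise lawful action.
    return True

print(permitted({"harms_human": False, "disobeys_order": False}))  # True
print(permitted({"harms_human": True}))                            # False
```

The point of the sketch is the ordering itself: a lower Law can never override a higher one, which is exactly where most of the famous edge cases in the stories come from.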
no subject
yep....hugely problematic
But, as I said, the discussion starts somewhere.
no subject
Found it!! https://en.wikipedia.org/wiki/With_Folded_Hands_...
The thing is, despite our affection for the mantra "Safety First," if we ever want to do anything other than sit quietly on a cushion, it's really more like "Safety Third." It's a priority, but it's not number one.
I'd rather go with the Butlerian law: "Thou shalt not make a machine in the likeness of a human mind," or words to that effect. But it's clear that isn't happening. Too bad.
Not arguing
But truthfully, if you take anything to the logical extreme, the results will be chilling to someone somewhere.