Tuesday, September 6, 2016

Caught Red-Handed


At the heart of much speculative fiction (and fiction in general) is a question. What if? On Tuesdays I like to throw one out there and see what you make of it. Do with it as you please. If a for-instance is not specified, feel free to interpret that instance as you wish. And if you find this becomes a novel-length answer, I'd appreciate a thank you in the acknowledgements ;)

The future is now. Apparently, it may soon be possible to catch a thief who attempts to steal your phone. Of course, this hasn't been implemented yet (and it may never be), but just the fact of it...

What if you had an artificial intelligence assisting you? Should it be programmed to obey every law? What if you were doing something a little shady with its assistance? Should it inform on you if you do something illegal?

32 comments:

  1. Wow...that's a pretty deep question. There are actually experts who have gathered at Stanford University to predict what A.I. will do to our world by 2030 and the ethical issues we need to work out. Did you hear that self-driving cars could make a decision about how to handle an accident to minimize deaths and injuries? It could mean one passenger/driver dies to minimize the deaths/injuries of others. Do we want cars making those decisions? It's a tough one!

    Replies
    1. I did hear that about self-driving cars. A member of my writers group wrote a short story exploring that very topic (last year--she's pretty prescient about things like that).

  2. I'm already worried about the cameras that catch you speeding or running a red light and send you a ticket in the mail.

  3. I don't think we should program it to obey every law. Some of us break a law to avoid an accident, like speeding up if we think someone is driving erratically or has cut other cars off. I think if they were programmed like that, they would be too rigid. Our lives are lived with too much flexibility, I think, for this to work.

    betty

  4. I could see it for serious things like murder, but eh on some others. Most people go a little over the speed limit; technically that's breaking the law even a mile over, and it would be rather ridiculous to report that.

  5. No, it shouldn't be programmed to obey every law. There are always exceptions to the rules.

    Personally, I'm a little concerned about AI. I don't think it's good for encouraging humans to think for themselves.

    Replies
    1. Let's just hope all of the fictional problems they've had inform the programmers.

  6. Very interesting questions. Could A.I. be trusted with judgment calls on right and wrong? Things aren't always as they seem, and a wrong call could result in someone being killed. For example, an undercover police officer who appears to be a criminal but is actually gathering intelligence for criminal prosecution.

  7. I am such a total 'rule follower' that I'm afraid I would be very boring!!

    Replies
    1. Who knows? Maybe you'd trip up the AI's right-vs-wrong sensors.

  8. It's kind of a hard question for me to answer. I can't imagine breaking any laws, so part of me is like "Why wouldn't they program things this way?" But that probably isn't the only way of looking at things. Things can malfunction, for example, and then the owner could be in a lot of trouble for no reason.

  9. If it's helping me to do something shady, I don't want it reminding me I'm...doing something shady. lol Rather, I'd want it to be smart enough to help me get away with it. ;) (Yes, that is my silly answer.)

    Replies
    1. No, good point. The idea came from a TV show where a character was doing something not quite above board but for the right reasons.

  10. Your commenters are all thought provoking. Drawing on my science fiction reading, there are Isaac Asimov's "Three Laws of Robotics," built into the positronic brain of every robot manufactured (in Asimov's science fiction world). The first is: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." The second and third are: "A robot must obey orders given it by human beings except where such orders would conflict with the First Law," and "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law." So, under those laws, an AI could indeed be programmed to be agreeable to breaking laws, as long as breaking them didn't cause injury to a human being. And all of us break laws every day, even if we are totally honest. (Speeding, littering, or jaywalking, anyone?) Alana ramblinwitham.blogspot.com

    Replies
    1. Ah yes, Asimov's laws. I am familiar with them. If they conflicted with what the person wants to do...

  11. I should not be allowed to have such a thing. I would abuse it horribly and then feel guilty the next day. A guilt that I would get over by the next time I needed it for something.

  12. I think people would go nuts if that were to come about lol

    Replies
    1. The AI, or the AI that helps you get away with illegal stuff?

  13. Doing unethical things isn't my cup of tea. Even if I knew I could get away with it, I'm pretty sure I wouldn't.
    I tell people I'm 97% honest.
    Coffee is on

    Replies
    1. I bet that's a pretty good percentage for the rest of us as well.

  14. As usual, we have all sorts of people in this world, and some may take criminal advantage of such a system.

  15. Frankly I'd just like a phone that can beep me when I'm in the vicinity of annoying people. That would be an innovation.

    Replies
    1. You know, that might be closer than you think. They have apps that tell you when friends are close by. It's not a huge jump to get to annoying people.

  16. Yikes. I always worry about giving technology too much control.

  17. Oh gosh no! I mean, there are some things people should not get away with, like texting/talking on the phone. But I'm thinking of some people who are on a fixed income: if they work a little just to make some extra money to pay bills, their benefits would be deducted dollar for dollar. Some work under the table, and I'm fine with that overall. I would not like being watched like this at all.

    Replies
    1. I guess it would be situation-based, then.

  18. This reminds me of those little trackers you could install in your car to get a lower rate on your car insurance. I think they never quite took off because people were worried about privacy.

    Then you've got the self-driving cars. And while we're apprehensive about them here in the US, I've got friends in Germany who are all for them. They think it will be safer.

    *shrugs*

    I'd love to have help, but I'd prefer it be "my help" and not some government law enforcer. I guess I'm thinking in terms of The Jetsons TV show, where the robot maid made food, cleaned, watched the kids, etc., as opposed to breaking into banks, stealing lollipops from babies, or what-have-you. I kind of like the idea of something similar to the iPhone, where you can use it, but it neither helps nor deters you from getting into trouble. And then the government has to pay someone to break into it for any information that would give you away ... which is not to say I think the government was right for doing so.

    Replies
    1. Ah yes, kind of a neutral party. I think that's where we're going, but that might not necessarily be the best scenario.


I appreciate your comments.

I respond to comments via email, unless your profile email is not enabled. Then, I'll reply in the comment thread. Eventually. Probably.