Brute force AI: You don’t train the robots!

Isaac Asimov established three laws for the AI world:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

While studying A.I., we never questioned the validity of the 3rd law! What is existence, for a robot? Did we ever question the existence of the refrigerator, or wonder how it *felt* when we were away on a long holiday? How is a robot different from a refrigerator?

Automatic Doors Have Feelings?

Most of humankind is yet to understand consciousness at an individual/self level, forget re-imagining it (ha ha ha ha ha!) for a mechanical thing! And now we come to the central point of this post: why on earth are we training the robots?

Allow me to put things in perspective:

We train things where there is scope for learning. Try to picture this: the need to learn = knowing you’re lacking something + valuing what you want to arrive at!

Can a robot fit into this definition?

Would you ever want to teach your refrigerator not to give ice-cold water to the elderly guest coming over in the evening?

Would you ever want to teach your television to turn the volume down during the wee hours of the morning?

Just because the refrigerator and the television have evolved into the shape of a human body, do we start believing they are human?

Why should a human-shaped body alone evoke human feelings of caring and empathy?

As robots evolve in physical and analytical capabilities, we as humans are bound to develop attachment to them! It’s our nature to breathe life into whatever crosses our path!

The problem comes when that attachment starts to get reciprocated and we go into a spiral!

That cannot happen!

We waste precious time exchanging human feelings with a machine that is trained and programmed to behave like a human! There are already dozens of problems to be solved and addressed at the individual level, and we bring an inconsequential no-op into the equation!

Bigger/Better Things to Figure Out?

Humankind is put to shame at the end of the following video:

For the sake of humanity, we should not do such things! We are training the robot to complete its work despite a human obstructing it from completing its task? I leave the dark consequences of this dare for you to figure out! I will summon The Terminator to help me!

Call this guy now!

To summarize:

  • You don’t train a robot (You are wasting your time if you’re doing this)
  • You program a robot (You do it once and forget it forever! A rough sketch of the difference follows below.)
  • You can train a human (You may come across donkeys too!)
  • You cannot program a human (Except yourself; tread with caution!)
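
To make the closing contrast concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it is a real robot or appliance API; the fridge rule and the toy "training" loop are hypothetical. The point is the difference: a programmed rule exists because a human wrote it down once, while a trained rule is only whatever pattern the machine fits to the examples it is shown.

```python
# Illustrative only: contrasting "programming" a rule with "training" one.

def serve_water(guest_age_years: int) -> str:
    """Programmed: a hard-coded rule, written once and forgotten forever."""
    return "room-temperature water" if guest_age_years >= 60 else "ice-cold water"


def train_threshold(examples):
    """Trained: search for the age cutoff that best fits labelled examples.

    The machine never 'knows it is lacking something' or values the outcome;
    it only fits a pattern to whatever data it is shown.
    """
    best_cutoff, best_score = 0, -1
    for cutoff in range(0, 121):
        score = sum(
            (label == "room-temperature water") == (age >= cutoff)
            for age, label in examples
        )
        if score > best_score:
            best_cutoff, best_score = cutoff, score
    return best_cutoff


if __name__ == "__main__":
    # Programmed behaviour: defined by the human, once.
    print(serve_water(72))  # -> room-temperature water

    # Trained behaviour: defined by the examples.
    data = [
        (25, "ice-cold water"),
        (70, "room-temperature water"),
        (65, "room-temperature water"),
    ]
    print(train_threshold(data))  # -> 26, the first cutoff separating these examples
```
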

Please don’t code the heart into a machine!

(Brought to you by yet another reflection/inspiration via @ideapreneur)