Just a thought -
There have been a number of instances in the last 20 years - primarily in Star Trek, I think - where the author of a story posits that cybernetics gradually moves from a state where the AI "just runs programs" (to quote Fisher Stevens as 'Ben' in the movie Short Circuit) to one of genuine awareness. I am thinking of an SF story I read (The Moon is a Harsh Mistress) where a supercomputer gradually develops a sense of self-awareness. Other stories posit sudden changes, like Johnny 5 or the nanites that Wesley Crusher allowed to escape.
We just do not know what will happen when a computer with enough circuits and/or speed to simulate massive interconnections begins to operate in AI mode. I think that will continue to be of interest to SF writers/readers until and unless it actually occurs, and perhaps for long afterwards.
Of course, that's not counting HAL in 2001: A Space Odyssey, or R. Daneel Olivaw in The Caves of Steel, or Commander Data. They were built to be autonomous and self-aware.
About 40 years ago, as a student at the University of Minnesota, I was tasked to write a Turing Challenge program in BASIC that gave the appearance of natural thought. I chose to write a psychological-inquiry program - an AI shrink. I'm proud to say that I received an A+ on it. In just over 400 lines of code, I managed to create a system that recognised what was being answered and generated a 'next question' based on each answer supplied by the user. The program repeated its questions verbatim so infrequently that the instructor had to triple-check that the printout and the (paper, in those days) tape I turned in actually produced those results.
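For anyone curious how such a program can get by in so few lines, here is a minimal sketch in Python of the keyword-matching approach that programs in this family (ELIZA being the famous example) typically use. The original was in BASIC and I am not claiming this is its exact logic; the keyword rules and function names here are my own illustrative inventions.

```python
import random

# Hypothetical keyword rules: if a keyword appears in the user's answer,
# pick one of several follow-up questions associated with it. Having
# multiple templates per keyword is what keeps verbatim repeats rare.
RULES = [
    ("mother", ["Tell me more about your mother.",
                "How does your mother make you feel?"]),
    ("always", ["Can you think of a specific example?",
                "Always? Surely not every time."]),
    ("i feel", ["Why do you feel that way?",
                "How long have you felt like this?"]),
]

# Generic prompts used when no keyword matches.
FALLBACKS = [
    "Please go on.",
    "What does that suggest to you?",
    "Can you elaborate on that?",
]

def next_question(answer, rng=random.random):
    """Choose the next question based on keywords in the user's answer.

    rng is injectable so the selection can be made deterministic in tests.
    """
    lowered = answer.lower()
    for keyword, questions in RULES:
        if keyword in lowered:
            return questions[int(rng() * len(questions))]
    return FALLBACKS[int(rng() * len(FALLBACKS))]
```

A real session would just loop: print a question, read the answer, call `next_question`, repeat. With a few dozen keywords and three or four templates each, the variety adds up quickly, which is presumably why my instructor found the output so hard to believe.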
