This one is from back in 2015, I think.
1. Do you agree with technologists and scientists like Elon Musk, Bill Gates and Stephen Hawking that AI could be detrimental to humanity? That is, in the short term affecting jobs and the economy, and in the long term potentially dangerous?
At one time I worked in a factory filled with CNC mills and lathes. On occasion I programmed and ran a massive machine that drilled, bored, tapped and milled engine blocks using tools from a carousel containing 32 of them. Once set up, it did this at a rate of a few hours per engine block. It was inhumanly fast and accurate, and probably did a job that would have taken ten, if not more, skilled milling machine operators to do in the same time. So in essence the Luddites were right about machines. The same rule can be extended to computer-controlled machines and, of course, the more intelligent they get the more human jobs they can take. At present they’re taking over repetitive tasks, but as time goes on your solicitor, lawyer, doctor, surveyor and many more besides will probably be artificial. I don’t see this as a problem as far as the quality of the work is concerned. However, our society will have to change radically. Quite simply, if machines are doing all the work, who earns the money to pay for that work? Capitalism would collapse, and the detrimental effect might be that we would end up under some hideous centrally controlled authoritarian socialist regime. But I can see the optimistic side too. Through technology the human condition has always improved, and the result of the above may be more utopian than dystopian, especially if that central control is by machine, which would lack many of the detrimental drives of human politicians.
As for the potential long-term dangers, you first have to get aboard with ideas about the AI singularity, and I’m not sure that I am. Yes, technological development has ever been on an upward exponential curve, but I’m wary of this idea of a sudden leap taking things beyond human conception. This ‘rapture of nerds’ is too much like religion for tech-heads for my liking. Yeah, we’ll get to AI, but we’ll necessarily build it from the nuts and bolts upwards and understand the process all along the way. It will impinge on our lives in much the same way as all our other technologies: science fiction one day, then part of our lives the next, taken with a shrug and a ‘What was all the fuss about?’ I also think it highly likely that as we get to AI we’ll also be upgrading humans, and there’ll be a point where, on the mental plane, it’ll be hard to distinguish us from our creations.
2. In their Future of Life open letter, Musk, Hawking and others say that AI could also be beneficial to mankind, provided that it does what we want it to do. Do you think that researching the risks will be enough to prevent adverse effects? Or do you think that creating another sentient race of any kind (robots, androids, cyborgs, software AI) can’t be risk-proofed because, by its intelligent nature, it will have its own goals and ideals?
Well they’re covering their arses both ways aren’t they? AI could be a danger and it could be beneficial. This is basically a statement that can be made about any new technology and rather undermines any point they were trying to make.
There will be dangers with AI, just as there were dangers with the car, with electricity, with the chemical industry. The biggest danger, I suppose, is how it is used by us. Nuclear weapons are the same: they could destroy our civilization, but only if we use them for that. Killer robots are a real possibility, if not a reality now, but the best ones are unlikely to end up in the hands of anyone who wants to destroy everything. In the end it all comes down to how they are used and how they are programmed. An artificial intelligence per se will be without the kind of evolved and sometimes destructive drives we have … unless they are put there by us. Yes, AI could develop its own goals and ideals, but I still don’t buy into the ‘rapture of nerds’ and the idea that it could become an all-powerful force. And again, I also think that by the time it’s that effective we will struggle to distinguish it from ‘evolved intelligence’.
3. Do you think the development of AI is inevitable? Is it also necessary, e.g. for space colonisation, or for solving world problems like energy and climate change?
It would certainly be very useful for space colonisation and many other tasks where putting a human in place can be difficult. In fact, any problem becomes more solvable the more brain power is applied to it. Yes, I think AI is inevitable. It’s arguable that it’s already here.
4. What do you think science fiction about AI can teach us about how to conduct research in the field?
Don’t leave out the ‘off’ switch?
5. Which are the most important writers of AI sci-fi and why are their works so influential? Which writers should researchers be listening to?
Science fiction plays with many ideas, and by a general reading of the more up-to-date stuff researchers can glean some of them. But the researchers are the experts, not the SF writers, and if anything the flow of ideas goes the other way. Mostly, I hope SF is something that instils enthusiasm in those researchers for what they are doing. Well, in fact, in some cases, I know it is.
6. Are there lessons we can learn from sci-fi about driverless cars, autonomous drones, learning algorithms and other technologies that exist now?
Not a lot. SF writers (mostly) aren’t technologists, traffic control experts, military tacticians or high-level programmers, but generalists. And SF gets things wrong a damned sight more than it gets things right.
7. In your own work, something that comes up is the difficulty of creating an AI for a specific purpose (e.g. war drones) that is then left directionless once that purpose is over (the end of the Prador war). Do you think it’s as dangerous to create an intelligent machine that we purposely restrict as much as possible as it is to give that machine self-determination?
I guess you might end up with some problems if you repurposed a war robot as a traffic cop and didn’t take away its guns. But really I don’t think the purpose an AI has, or has been made for, will be so permanent. It’s difficult to re-educate a human trained or indoctrinated to kill because we don’t know how to take one apart and put it back together again, physically or mentally; in fact we’re only just dipping into figuring out how we work. AIs, because we will have created and understood everything that goes into them, should be much more malleable. My war drones are really a cipher for the hardened combat veteran trying to adjust to peacetime.
8. Is there any specific part of your own work that you hope AI researchers pay attention to?
I just hope they read and enjoy it when they’re not working, and return to what they do well with enthusiasm. Though I wouldn’t mind if that enthusiasm became directed more towards memplants, mental uploading and other human enhancements.