Chris Zielinski, UK, writes:
"Considering the kind of human input such reviews currently require - reading and understanding everything that is being published in the topic under review, selecting a few key papers and rejecting many duplicate/rehashed or faulty papers (including plagiarism/self-plagiarism) - I doubt if dumb software could ever manage this alone."
It's not really a matter of the software being dumb or smart; it's that any software can only be trained on data available from the past, whereas a new result is genuinely new.
"Human input to this process will continue to be needed in almost any scenario of AI development."
Sure. A combination of human teaching and machine learning can win the day. It's the approach I take in Bims: Biomed News to help users maintain reports on recent additions to PubMed. My users marvel at how good the results of machine learning are. They forget that they themselves taught the machine what to learn. I guess my users are too modest to heap praise on themselves.
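To make the division of labour concrete, here is a minimal sketch of that kind of human-taught relevance filter. This is not the actual Bims code; the class and method names are invented for illustration, and it uses a plain naive Bayes scorer over titles, assuming the user has marked past papers as relevant or irrelevant.

```python
# Minimal sketch (not the actual Bims implementation): a naive Bayes
# relevance scorer that learns only from what a human user has taught it.
import math
from collections import Counter

def tokenize(title):
    """Crude word tokenizer; real systems would do far more."""
    return title.lower().split()

class RelevanceClassifier:
    def __init__(self):
        self.word_counts = {True: Counter(), False: Counter()}
        self.doc_counts = {True: 0, False: 0}

    def teach(self, title, relevant):
        """The human side: the user marks a paper; the machine remembers."""
        self.doc_counts[relevant] += 1
        self.word_counts[relevant].update(tokenize(title))

    def score(self, title):
        """The machine side: log-odds of relevance given past markings,
        with add-one smoothing so unseen words don't blow up."""
        log_odds = math.log((self.doc_counts[True] + 1) /
                            (self.doc_counts[False] + 1))
        for w in tokenize(title):
            p_rel = (self.word_counts[True][w] + 1) / (self.doc_counts[True] + 2)
            p_irr = (self.word_counts[False][w] + 1) / (self.doc_counts[False] + 2)
            log_odds += math.log(p_rel / p_irr)
        return log_odds

# Usage: the "marvellous" results reflect the user's own teaching.
clf = RelevanceClassifier()
clf.teach("malaria vaccine trial", relevant=True)
clf.teach("stock market report", relevant=False)
print(clf.score("malaria vaccine results") > clf.score("stock market news"))
```

The point the sketch makes is the one in the text: every bit of the machine's apparent cleverness came in through the `teach` calls.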
"He began to wonder if the system was producing accurate results - and more than that, if they were the best results. The algorithms were too complex to check - it would have taken forever - so they scrapped the system and went back to manual."
Well ... if you take a standard machine learning algorithm implemented in a standard package, say TensorFlow or libsvm, you can be pretty sure it's a battle-proven piece of software. While all software has bugs, the bugs that remain in such packages are unlikely to render the results invalid.
Yes, machine learning methods tend to be "black box": when you get a result, that's it; there is no way to trace it back to something specific. If they fail, it is most likely because the learning input was incorrect or incomplete, rather than because the particular learning method was the wrong choice or its implementation was buggy.

Or worse, you could be in the situation I was in when I built Biomed News. I spent the better part of two months searching for why my results were systematically poor ... only to find that the sorting I did on the results had a trivial bug. The complicated machine learning, where I thought the error was, was perfectly fine. Sometimes the human is just dumber than the software.
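A hypothetical reconstruction of that kind of bug (not the actual Bims code, and the scores are made up): the classifier's output is correct, but the step that ranks papers by score compares the scores as strings, so lexicographic order quietly replaces numeric order.

```python
# Hypothetical illustration: the ML scores are fine; the sort is not.
papers = {"best match": 10.5, "good match": 9.5, "poor match": 2.0}

def ranked_buggy(scores):
    # Bug: str() makes the comparison lexicographic, so "10.5" < "2.0" < "9.5".
    return sorted(scores, key=lambda p: str(scores[p]), reverse=True)

def ranked_fixed(scores):
    # Fix: compare the scores as numbers.
    return sorted(scores, key=lambda p: scores[p], reverse=True)

print(ranked_buggy(papers))  # the highest-scoring paper lands dead last
print(ranked_fixed(papers))  # ['best match', 'good match', 'poor match']
```

The lesson matches the anecdote: results that look "systematically poor" can come from a one-character mistake downstream of a perfectly sound model.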
HIFA profile: Thomas Krichel is Founder of the Open Library Society, United States of America. Professional interests: See my homepage at http://openlib.org/home/krichel Email address: krichel AT openlib.org