Artificial Intelligence Research Needs Responsible Publication Norms | #TpromoCom #AI #ArtificialIntelligence #RulesofEngagement | After nearly a year of suspense and controversy, any day now the team of artificial intelligence (AI) researchers at OpenAI will release the full and final version of GPT-2, a language model that can “generate coherent paragraphs and perform rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.”
When OpenAI first unveiled the program in February, it was capable of impressive feats: Given a two-sentence prompt about unicorns living in the Andes Mountains, for example, the program produced a coherent nine-paragraph news article. At the time, the technical achievement was newsworthy—but it was how OpenAI chose to release the new technology that really caused a firestorm.
“In light of the political climate today, and considering how many of today’s popular news organizations do manufacture news, rather than simply report it, this fear appears to be warranted.
“Projecting this fear out a bit, let’s consider the fact that a true AI-based platform must be able to rewrite its own programming in order to truly ‘learn’ new things. I may not be an expert on AI tech, but I think it’s safe to say that this is going to be an issue one day, where true AI programs could be programmed with a mission that is contrary to what society deems acceptable. The worry that many people in that industry have is that it could one day develop to the point where it could have, and act on, its own agenda, not ours.” –Al Colombo, Senior Design Specialist with TpromoCom of Canton, Ohio.
To read the remainder of this news story, click here.