
9 Jan, 2023

The conference organizers have stated that AI tools may be used to improve and refine authors' work; however, text generated entirely by AI is not permitted.

The International Conference on Machine Learning (ICML) has prohibited the use of AI language models like ChatGPT to write scientific papers, sparking a debate about the role of AI-generated text in academia. The conference stated that papers that include text generated by a large-scale language model such as ChatGPT are only allowed if the text is presented as part of the paper's experimental analysis.

The policy has received mixed reactions from AI researchers, with some defending the decision and others criticizing it. The conference has also addressed concerns about ownership of AI-generated content and the issue of authorship in relation to AI-generated text. The ICML will reassess its ban on AI-generated text in the coming year.

The use of AI tools like ChatGPT has caused uncertainty and concern among organizations, leading some to implement their own bans. For example, Stack Overflow banned responses generated by ChatGPT on its platform last year, and the New York City Department of Education recently blocked access to the tool on its network.

One of the main fears surrounding AI-generated text is that it can be unreliable. These systems are essentially autocomplete functions, trained to predict the next word in a sentence based on patterns in the data they were trained on. They have no pre-defined database of facts to refer to, so they may present false information as if it were true: a statement that sounds plausible is not necessarily accurate.
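The "autocomplete" framing above can be made concrete with a deliberately tiny sketch: count which word follows which in a small corpus, then predict the most frequent successor. This is not how ChatGPT works internally (it uses a neural network trained on vast amounts of text), but the training objective, predicting the next word from observed patterns, is the same in spirit, and the sketch shows why such a system has no notion of factual truth.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: tally which word follows which in a tiny
# corpus, then "autocomplete" by returning the most frequent successor.
# The model only reflects patterns in its training text; it has no
# database of facts, so fluent output is not evidence of accuracy.
corpus = (
    "the model predicts the next word "
    "the model has no database of facts"
).split()

successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word):
    # Return the most common follower seen in training, or None
    # if the word never appeared with a successor.
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "model" follows "the" most often here
```

The predictor will happily complete any prompt its statistics cover, whether or not the resulting sentence is true, which is precisely the reliability concern raised above.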

The ICML's ban has also raised practical questions about enforcement. One concern is how to differentiate between writing that has merely been edited by AI and writing that has been produced entirely by AI tools. Another is determining at what point a series of small AI-guided changes amounts to a larger rewrite.

Additionally, some worry that a ban on AI writing tools could disproportionately affect researchers who do not speak or write English as their first language, since these tools can help them communicate more effectively with their peers. However, AI writing tools differ from simpler software like Grammarly: they can make far more substantial changes to a text and can generate entirely novel text, including spam.

According to Goldberg, it is possible for academics to produce papers using AI, but there is little incentive for them to do so. Even if an AI-written paper passed peer review, the author would still be held responsible for any incorrect information in it, which could damage their reputation. The ICML has acknowledged that AI-generated text is difficult to detect definitively and has said it will not proactively check submissions for such text; it will only investigate submissions flagged as potentially written by AI. Ultimately, the ICML is relying on traditional methods of enforcing academic norms, and it will be up to humans to judge the value of text that has been assisted by AI.