
OpenAI bans Chinese users who used ChatGPT to create a monitoring tool

Source link: https://news7.asia/news/openai-bannit-of-chinese-users-who-used-chatgpt-to-create-a-monitoring-tool/

OpenAI, the publisher of ChatGPT, has banned from its platform several networks of malicious actors, in all likelihood operating from China. In a security report published online on Friday, February 21, the company says it detected a disinformation operation as well as the creation of a social media monitoring tool.

The US company's security teams detected these users by monitoring uses of ChatGPT, its well-known conversational agent. By studying the activity of a group of accounts working in Chinese and during Chinese office hours, OpenAI was able to reconstruct the outlines of the tool they were building: a program analyzing in real time messages on Facebook, X, YouTube, Instagram, Telegram and Reddit, in particular flagging calls to demonstrate in favor of human rights. Dubbed "Peer Review" by OpenAI, the tool would report these messages to the Chinese authorities as well as to their embassies abroad. The monitored countries include the United States, Germany and the United Kingdom; among the monitored topics are support for the Uighurs and the diplomacy of the Indo-Pacific region.

ChatGPT was used by these actors to write the sales pitch for "Peer Review", describe its modules in detail and debug its computer code (which appears to rely on Llama, Meta's open-source rival to ChatGPT). OpenAI was unable to determine whether the tool was deployed on a large scale.

Translation of propaganda articles

OpenAI was also able to follow the activity of a second network of Chinese actors engaged in disinformation operations. They used ChatGPT to generate short messages published on social networks, in particular denigrating the posts on X of the dissident Cai Xia, a former professor at the Central Party School of the Chinese Communist Party (CCP).

Present at the artificial intelligence summit held in Paris at the beginning of February, Ben Nimmo, lead investigator at OpenAI, had mentioned a similar example recently foiled by his teams. Chinese users had employed ChatGPT to reinforce the "Spamouflage" disinformation campaign, using the generative AI tool to write tweets favorable to the CCP and hostile to the West, as well as to create websites critical of dissidents living abroad. They were exposed by the OpenAI teams because they had used the same ChatGPT accounts to cheat on exams for internal promotions within the Communist Party.

The group targeting Cai Xia is probably a different one. Its actors used ChatGPT to produce long articles criticizing the United States, which they managed to place in a dozen Mexican, Peruvian and Ecuadorian newspapers, sometimes as sponsored content. More specifically, the AI was used to translate and expand pre-existing Chinese articles highlighting divisive subjects such as political violence, discrimination, sexism and foreign policy, blaming them on the weakness of American political leadership.

OpenAI has been publishing security reports for more than a year, with the stated objective, among other things, of "preventing the use of AI tools by authoritarian regimes to strengthen their power or their control over their citizens, to threaten or coerce other states, or to carry out covert influence operations". Its investigators themselves use AI-based tools.


Author : News7

Publish date : 2025-02-26 02:01:34

