Reacting to the capacity of powerful artificial intelligence tools such as Microsoft-backed OpenAI’s ChatGPT to harm societies and businesses, US regulators have taken up the task of establishing new rules to govern such technologies.

The US Department of Commerce, which is responsible for creating the conditions for economic growth and opportunity in the country, has called for input from industry players to help draft regulations pertaining to AI.

"Just as food and cars are not released into the market without proper assurance of safety, so too AI systems should provide assurance to the public, government, and businesses that they are fit for purpose," the Commerce Department said in a statement.

"Our inquiry will inform policies to support AI audits, risk and safety assessments, certifications, and other tools that can create earned trust in AI systems," said Alan Davidson, assistant secretary of commerce.

Separately, European authorities have deepened their inquiry into the chatbot, days after Italy temporarily banned its use. The ban followed OpenAI’s disclosure that it had taken the tool offline on March 20 to fix a bug that allowed some users to see the subject lines of other users’ chat histories.

Italian regulators have maintained that OpenAI has no legal basis to engage in massive data collection and questioned the way it is handling the information it has gathered.

European authorities, including those of France, Ireland and Germany, have since approached their Italian counterpart to try to establish a common position on ChatGPT. Even Canada's data regulator said it was opening an investigation into OpenAI.

France's CNIL, regarded as the most powerful European data regulator, confirmed that it had received two complaints about the chatbot, relating to its privacy policy and to false personal information.

The European Data Protection Board (EDPB), the European Union's central data regulator, is forming a task force to help member countries deal with ChatGPT. The body said its members chose to take action after monitoring Italy's approach.

"The EDPB decided to launch a dedicated task force to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities," the body said.

Spain's AEPD data protection agency also said it had opened an inquiry into the software and its US owner, saying that while it favored AI development, "it must be compatible with personal rights and freedoms".

Similarly in China, where ChatGPT is inaccessible, a new set of draft rules for ChatGPT-like services has been rolled out. According to a proposal by the Cyberspace Administration of China (CAC), companies offering generative AI services and tools in China must prevent any form of discrimination, fake news, terrorism-related material, and other anti-social content. If banned content is discovered or reported, providers must re-train their models within three months to prevent a recurrence. Violations of the rules can result in fines of up to 100,000 yuan (approximately US$14,520) and/or service termination. The draft regulations are open for public comment until May 10.

US AI research lab OpenAI’s ChatGPT is fine-tuned from GPT-3.5, a language model in the GPT-3 family trained to produce text. GPT stands for Generative Pre-trained Transformer. Such “transformer” models are sequence-to-sequence deep learning programs that can produce a sequence of text given an input sequence. ChatGPT was optimized for dialogue using Reinforcement Learning from Human Feedback (RLHF), a method that uses human demonstrations to guide the model toward human-like behavior.
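
To make the “sequence in, sequence out” idea concrete, the minimal sketch below runs the openly available GPT-2 model through the Hugging Face transformers library as a stand-in; ChatGPT itself is served by OpenAI’s hosted GPT-3.5 model and is not invoked here, and the prompt and generation settings are illustrative assumptions rather than anything prescribed by OpenAI.

    # Illustrative sketch: a decoder-only transformer producing a sequence
    # of text from an input sequence (a prompt). GPT-2 is used only as an
    # openly available stand-in; ChatGPT itself runs on OpenAI's hosted
    # GPT-3.5 model and is not called by this code.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "Regulators are examining generative AI because"  # example prompt
    outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

    print(outputs[0]["generated_text"])

ChatGPT performs the same prompt-to-text step at far larger scale, with the RLHF fine-tuning described above shaping its conversational answers.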

As such, it is capable of producing articles, essays and even poetry in response to user inputs, or prompts. ChatGPT relies on generative artificial intelligence (AI), a class of algorithms that can create content, including audio, code, images, text, simulations and videos, from patterns learned in large data sets drawn from the internet. AI image-generation models like DALL-E (its name a blend of surrealist artist Salvador Dalí and the Pixar robot WALL-E) can create extraordinarily beautiful images on demand. Generative AI falls under the broad category of machine learning.
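
The same prompt-to-content pattern applies to images. The sketch below is a minimal example that assumes the pre-1.0 openai Python package and a valid API key stored in an OPENAI_API_KEY environment variable; it asks OpenAI’s DALL-E image endpoint to generate a picture from a text prompt, and the prompt and image size are arbitrary choices for illustration.

    # Illustrative only: generate an image from a text prompt using OpenAI's
    # image API (DALL-E) via the pre-1.0 openai Python package. Requires a
    # valid API key; the prompt and size below are arbitrary examples.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]  # assumed environment variable

    response = openai.Image.create(
        prompt="a surrealist painting of a robot reading a newspaper",
        n=1,
        size="512x512",
    )

    print(response["data"][0]["url"])  # URL of the generated image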

Following OpenAI’s release of its latest model, GPT-4, tech billionaire Elon Musk and more than 1,000 tech researchers and executives have called for a six-month “pause” on the development of advanced artificial intelligence systems such as OpenAI’s GPT to stem what they perceive as a “dangerous” arms race.

In response to such strong scrutiny, OpenAI has maintained that it is "committed to protecting people's privacy" and believes the tool complies with the law. OpenAI also says that its AI systems are subject to "rigorous safety evaluations" and that comprehensive regulation of the sector is needed.

For now, however, tools like ChatGPT still have legal leeway to mishandle sensitive company information.

Unless clearly defined regulations for AI chatbots are established, the risk to privacy remains a legitimate one.