Over 50% of Moroccans who are familiar with ChatGPT, a chatbot that can generate language using artificial intelligence, said they believe it is dangerous for humanity, according to a survey from Sunergia Group, a market research firm.
The survey found that 30% of respondents strongly agree with the statement that ChatGPT poses a threat to humanity, while 22% said they somewhat agree.
It also found that older Moroccans tend to be wary of the new disruptive technology, with 100% of senior citizens (aged 65 and older) declaring that the chatbot threatens humanity, followed by 76% of Moroccans aged 35 to 44.
Moroccans aged 45 to 54 are the least skeptical, with 68% saying they view the language model as a threat, the survey report indicates.
Since its launch in November 2022 by OpenAI, ChatGPT has stirred widespread controversy over its ethical implications as well as the threat it poses to humanity.
Experts especially take issue with OpenAI’s motive behind developing the model: creating an ultra-intelligent AI system that surpasses the capabilities of the human brain. The company has said such a system could be ready by the end of this decade.
In March, tech experts, academics, and world-renowned CEOs like Elon Musk signed an open letter calling on all AI labs to pause for six months the development of AI systems more powerful than GPT-4. The letter garnered more than 33,000 signatures.
In recent years, many experts have also warned about the ethical implications of training a language model that could, in their view, acquire consciousness at an advanced stage.
Others have taken an even more pessimistic stance on the AI revolution, warning of the possible apocalyptic consequences of advanced general-purpose AI systems that could put an end to the very existence of the human race.
Responding to the widespread concerns about the threats associated with advanced general-purpose AI systems, OpenAI announced this month that it has assembled a team to “manage the risks” of developing such ultra-intelligent AI.
Meanwhile, governments and international organizations are taking strides toward regulating AI development.
Of these, the European Union is by far the most progressive. On June 14, the European Parliament approved the EU AI Act, a bill that would make it mandatory for models like ChatGPT to disclose all AI-generated content among other measures.
Lawmakers in the US have also introduced the National AI Commission Act, which aims to establish a body responsible for determining the country’s approach to AI.
Additionally, US lawmakers have been vocal about their intentions to regulate the technology. On June 30, Senator Michael Bennet wrote a letter to prominent technology companies, including OpenAI, urging them to implement labeling for AI-generated content.