
Now that content generated by artificial intelligence is slowly but steadily entering the mainstream, it is a good time to assess its implications for the way content is consumed, and the way it should be consumed.

Generative AI tools such as ChatGPT, PaLM and Claude can generate relatively well-researched written content in seconds, work that would usually take humans hours or even days to compile. Their computing capabilities drastically shorten the time needed to sift through extensive data and produce well-formed responses.

Similarly, in gaming, AI can create engaging games in a relatively short time and with few resources, in addition to customizing the overall gameplay experience. Generative AI tools employ advanced machine learning techniques, chiefly deep learning models such as generative adversarial networks (GANs), variational autoencoders (VAEs) and large language models (LLMs), to achieve such feats. Once trained on vast datasets, these models are capable of recognizing patterns and underlying structures.
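The scale differs by many orders of magnitude, but the core idea of learning statistical patterns from training text can be sketched with a toy bigram model. This is an illustrative simplification only, not how production LLMs are built; the corpus and function names below are invented for the example:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, how often each other word follows it."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent successor of `word`, or None if unseen."""
    successors = follows.get(word.lower())
    if not successors:
        return None
    return successors.most_common(1)[0][0]

# A tiny, made-up training corpus.
corpus = [
    "generative models learn patterns",
    "generative models create content",
    "generative models learn structure",
]
model = train_bigrams(corpus)
print(predict_next(model, "models"))  # "learn" (seen twice, vs. "create" once)
```

Real LLMs replace these raw counts with billions of learned neural-network parameters and condition on long contexts rather than a single preceding word, but the principle of predicting likely continuations from patterns in training data is the same.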

GenAI is set to disrupt jobs generally performed by humans as the automation of tasks takes precedence. S&P Global Market Intelligence's 451 Research VoCUL: Connected Customer, Consumer Representative survey, conducted in Q2 2023, revealed a decline in 'mostly positive' sentiment about the impact of AI on careers among higher-education respondents, dropping from 22% to 14% since the launch of ChatGPT.

Content Generation: Uncovering the Source

In recent news, The New York Times sued AI developers OpenAI and Microsoft for copyright infringement over the unauthorized use of its published work to train AI technologies. The lawsuit contends that millions of The Times' published articles were used to train automated chatbots that now compete with the news outlet as a source of reliable information.

These advancements cast significant doubt on the matter of content generation. Over the years, human writers have contributed extensively to developing a credible knowledge base for the benefit of humanity, meeting standards of fact-checking and research that are difficult for machine learning to replicate. The creativity, empathy and expertise of human writers remain unquestioned to this day. However, GenAI tools are being trained on these very datasets, and billions of dollars in investment are flowing to AI companies without adequate compensation set aside for the actual data creators or owners.

This form of intellectual property ‘misuse’ has not reached an agreeable resolution so far. Ideally, such a resolution should come with a commercial agreement between GenAI companies and data creators with technological guardrails around generative AI products.

Adding fuel to the fire is the issue of 'deepfakes': hyper-realistic AI-generated images and voices of real people that can easily convince an unaware audience to react to their intended provocations. In recent times, deepfake technology has been used for marketing, political satire and entertainment. It also enables the rampant, rapid distribution of misinformation, especially in acts of cyber warfare. Furthermore, the amount of confusion deepfakes can inject into already complicated electoral processes, political campaigns and hate-mongering around the globe is uncomfortably numbing.


No Bars on Innovation

Despite concerns about GenAI displacing humans in various lines of work or distorting societal perceptions, the technology's vast potential across sectors should not be left unexplored. GenAI's proficiency in generating new content, leveraging natural language processing and multimodal interaction (text, image, voice, etc.), enables it to mimic human thinking and interaction.

An illustrative example is its application in overcoming language barriers in business communication. Frequently, individuals speaking different languages struggle to understand each other during business discussions, creating a 'communication chasm.' With the help of GenAI, however, some phones and video conferencing devices now offer real-time language translation, so conversations in native tongues can be rendered in each participant's preferred language. Such innovation could transform business communication models. GenAI can also foster greater participation in global communities by producing digital content accessible to non-English speakers, catalyzing inclusive global growth driven by digital transformation.

In the UAE, the potential of GenAI has been well-realized. During a global economic session, Omar bin Sultan Al Olama, the UAE’s Minister of State for Artificial Intelligence, Digital Economy, and Remote Work Applications, recognized GenAI as “the driving force of the future digital economy.” He underscored the importance of “cultivating digital skills” and ensuring the “agility of talents to adapt to new technologies to keep pace with global changes.”


Developing Guardrails for AI

The role of the telecommunications sector in providing the robust digital infrastructure needed to harness the potential of GenAI cannot be overstated. Building adaptable legislative frameworks demands a concerted effort from both the public and private sectors to secure mutual economic prosperity, and technology companies can leverage GenAI in this pursuit.

Since the technology is still evolving, developing permanent guardrails may not be practical at this point. However, industry stakeholders, including governments, telecom operators and regulatory bodies, must not stop exploring and studying the challenges and potential opportunities of cutting-edge technology such as AI.

The EU has released the AI Act, the world's first comprehensive AI law. It is expected to define rules and obligations for providers and users and to categorize systems by risk level. China and the US have also introduced AI laws that aim to place the necessary restrictions on companies providing GenAI services. However, the scope of regulating AI is understandably wide-ranging given the geographical, technological and cultural differences across the globe.

Despite the challenges, the regulation of GenAI-generated content must be taken with the utmost seriousness. Content governance standards such as the EU's Digital Services Act (DSA) demonstrate a move in the right direction. Under Article 40 of the DSA, vetted researchers can request data from very large online platforms (VLOPs) and very large online search engines (VLOSEs) to research systemic risks in the EU.

Recently, a formal request for information was sent to 17 platforms, including Amazon, YouTube, Facebook, TikTok and Apple, to see how they were complying with the DSA's requirements since the law took effect. The DSA includes a provision granting researchers unprecedented access to the data of VLOPs and VLOSEs to enable a deeper understanding of the impact of online media on society and to support effective oversight of DSA compliance.
