
The advent of AI has sparked enthusiasm for its technological promise, but it has also raised apprehensions about its potential societal implications. According to a study intended to guide policymakers in regulating this rapidly growing technology, today’s best-known artificial intelligence models lack much-needed transparency.

Advocating for and building greater transparency and responsibility in AI means supporting models that have been thoroughly tested, whose decisions can be fully explained to customers and employees alike, and for which ethical considerations are firmly in place.

Now is the time to ensure that AI models are built the right way and are therefore of maximum benefit to all of humanity.

The Imperative for Transparency

Transparency is the first step towards identifying and mitigating biases and promoting a fairer use of AI.

The ethical rationale for AI transparency rests on two core principles: fairness and accountability. AI systems, frequently employed in decision-making, can inadvertently perpetuate bias when they are trained on biased data. An absence of transparency conceals these biases, resulting in unjust outcomes. In terms of accountability, transparency enables the examination of AI decision-making processes: where AI decisions lead to adverse consequences, it allows events to be traced and any fault or negligence to be identified.

Stanford University researchers have noted that "less transparency makes it harder" for "policymakers to design meaningful policies to rein in this powerful technology."

The Foundation Model Transparency Index graded 10 of the biggest AI models on 100 different metrics of transparency. A researcher familiar with such assessments has stated that companies should strive for a score between 80 and 100%. It is thus disheartening that the top score, achieved by Meta’s Llama 2, was only 54%. Similarly disappointing, OpenAI’s GPT-4 scored 48%, while Google's PaLM 2 scored only 40%, just above Anthropic’s Claude 2 at 36%.

Moreover, the Stanford study found that no company provides information about how many users depend on its model or about the geographic locations where the model is used. Most AI companies do not even disclose the amount of copyrighted material used in their models, the researchers lamented.

Reduced transparency poses challenges for various stakeholders. These include: businesses assessing the suitability of commercial foundation models for applications; academics relying on these models for research; policymakers crafting effective regulations for this influential technology; and consumers seeking to grasp model constraints or address any resulting harm.

Only by employing appropriate tools, guidelines and governance can AI developers establish an ethical AI ecosystem that upholds data privacy while simultaneously fostering innovation.

Regulations and the Drive for AI Transparency

Transparency has become a paramount concern for policymakers in the EU, the US, the UK, China, Canada and, indeed, the G7 as a whole. According to an AI solutions expert, the essence of transparent AI lies in the ability to adequately explain and communicate the outcomes of an AI model. It encompasses three key dimensions: the explainability of technical components, the governance of systems and the transparency of impact.

One of the primary drivers behind the push for AI transparency and explainability is the establishment of trust in AI systems. This offers users and stakeholders a higher level of confidence in the system's responsible use. Understanding the decisions a system makes and the processes behind those decisions also empowers individuals to have more control over their choices, enabling informed consent regarding their interactions with the system.

The potential implications of AI have also raised concerns at the political level, prompting countries worldwide to develop national AI strategies and announce various AI-related policy measures. Many of these initiatives aim to address public apprehension about losing control in the face of an AI revolution. With so many incidents of AI exhibiting unintended behaviors, the demand for transparent and responsible AI is on the rise, and the provision of a policy safety net is thus following suit.

The need for AI transparency is underscored by certain unfortunate instances, such as autonomous vehicles displaying risky behaviors, deepfakes impacting elections, algorithmic systems erroneously rejecting patient claims, AI voice technology generating conspiracy videos and chatbots providing harmful advice to individuals seeking treatment for eating disorders.

In a timely response, the European Union has introduced the Artificial Intelligence Act (EU AI Act), a foundational law establishing transparency standards for certain AI systems across its member states.

The EU AI Act ensures that AI systems used in the EU adhere to safety, transparency, traceability, non-discrimination and environmental standards. Specifically, generative AI models like ChatGPT must comply with transparency requirements, including disclosing AI-generated content, designing models to prevent illegal content generation and publishing summaries of copyrighted data used for training.

Article 52 of the EU AI Act is a pivotal element of this protection. Among other things, the article emphasizes the importance of transparency in AI systems that recognize emotions or categorize biometric data, acknowledging the escalating impact of these systems on users and highlighting the need for awareness to safeguard rights and privacy. Such proactive measures pave the way for more transparent and accountable AI systems, a benefit for companies, users and society at large.

Additionally, the Group of Seven (G7) has called for collective action on AI, convening to establish an AI code of conduct for developers in order to promote safe, secure and trustworthy AI worldwide. The plan proposes that companies should publicly disclose reports on the capabilities, limitations and potential misuse of the AI systems they develop.

Transparency Essential for Trust and Accountability

As technology companies rush to deploy their AI models and as generative AI becomes increasingly prevalent in various products, a notable contrast emerges with the guarded nature of the most powerful models. In some cases, these models can be accessed only through a paid website, leaving developers and researchers in the dark about the details of their inner workings.

Highlighting the fundamental significance of transparency, a special adviser to the European Commission emphasizes that it is not merely an important ideal but a crucial element for effective AI accountability. The absence of transparency impedes researchers from scrutinizing foundational models for potential biases or security threats. Moreover, regulators and the public remain uninformed about the inherent risks of AI technologies, particularly in crucial sectors like healthcare and law enforcement.

In the corporate sphere, employees and executives generally believe that AI can enhance operational efficiency. However, a report from the Work Innovation Lab reveals a critical discrepancy: many companies are not transparent about the purpose and usage of AI, which may hinder recruitment and deter talent. The report notes that 85% of employees consider AI-related factors when joining a new company, with transparency about AI use being a paramount concern. Yet only 32% of employees feel their company has been transparent about AI use, while a higher percentage of executives (44%) believe their companies have demonstrated sufficient transparency in AI implementation. The survey, featuring responses from over 4,500 knowledge workers in the US and UK, sheds light on the growing importance of transparency in organizational AI practices.

Similarly, a broader survey encompassing 14,300 consumers and business buyers across 25 countries reveals that 58% of UK customers believe that enhanced transparency regarding companies' use of AI could significantly bolster their trust in the technology. This underscores a shared sentiment that transparency is a key factor in fostering trust and confidence in the rapidly evolving landscape of AI applications.

Future Outlook

The push for AI transparency is likely to center on several key trends: heightened regulatory scrutiny, technological advances in interpretability, increased global cooperation, and a growing emphasis on ethical and responsible AI practices. Advancing on all of these fronts is no small task, to be sure. But the positive evolution of these trends stands to significantly influence how AI systems are developed, deployed and trusted in the years to come.
