Wednesday 1 November 2023

Critical Question for Governments and Global Leaders: Regulate AI Foundation Models or the Application Layer?

Dear Friends and University Students,

Today, Andrew Ng, co-founder and Chairman of Coursera, Founder & CEO of Landing AI, founder of DeepLearning.AI, Managing General Partner at AI Fund, and former Director of the Stanford AI Lab (he holds a B.S. from Carnegie Mellon University, an M.S. from MIT, and a Ph.D. from the University of California, Berkeley), raised an important issue regarding the "Risk of regulations for companies developing foundational AI models in the name of National Security".

You might be wondering: what is a foundation AI model? These models are typically pre-trained on large, broad datasets to learn general patterns and relationships in data, and they can then be adapted or fine-tuned for specific downstream tasks.

Some examples, ranging from influential earlier deep learning architectures to the large pre-trained models that are foundation models in the strict sense, include:

- LeNet-5 (a classic convolutional neural network for handwritten digit recognition)
- AlexNet (for image classification)
- LSTM (Long Short-Term Memory, used in speech recognition and sentiment analysis)
- GPT (Generative Pre-trained Transformer, for language translation, chatbots, and content generation such as ChatGPT)
- BERT (Bidirectional Encoder Representations from Transformers, for text classification and question answering)
- DQN (Deep Q-Network, used in video games and robot control)
- YOLO (You Only Look Once, for detecting objects in images and video streams)
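To make "adapted or fine-tuned for specific tasks" concrete, here is a minimal sketch of fine-tuning a pre-trained BERT model for a two-class text classification task. It assumes the Hugging Face transformers library and PyTorch are installed; the example sentences and labels are made up for illustration.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load a model pre-trained on broad text corpora and attach a new
# task-specific classification head with two output classes.
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Hypothetical labelled examples for the downstream task.
texts = ["The claim was approved quickly.", "The claim was rejected without reason."]
labels = torch.tensor([1, 0])

inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**inputs, labels=labels)   # forward pass also computes the loss
outputs.loss.backward()                    # one fine-tuning gradient step
optimizer.step()

The pre-trained weights remain largely intact; a modest amount of task-specific training adapts the general model to the new task.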

Andrew argues that "adding burdens to foundational model development unnecessarily slows down AI's progress" and suggests that regulation should focus on the application layer (e.g., underwriting software, healthcare applications, self-driving, and chat applications).
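To make the distinction concrete, here is a small, hypothetical sketch of what control at the application layer can look like in code: the restriction targets a specific use case, while the underlying model is called unchanged. The function names and the list of blocked use cases are illustrative assumptions, not part of Andrew's proposal.

BLOCKED_USE_CASES = {"automated credit underwriting", "automated medical diagnosis"}

def base_model_generate(prompt: str) -> str:
    """Placeholder for a call to an unmodified, general-purpose foundation model."""
    return f"[model output for: {prompt}]"

def application_layer_generate(prompt: str, use_case: str) -> str:
    """Application-level guardrail: policy is enforced per use case,
    without touching the foundation model itself."""
    if use_case in BLOCKED_USE_CASES:
        return "This application does not automate decisions for this use case."
    return base_model_generate(prompt)

print(application_layer_generate("Summarise this policy document.", "document summarisation"))
print(application_layer_generate("Should we approve this loan?", "automated credit underwriting"))

The same foundation model serves both requests; only the application-level policy differs.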

In my view, we need to strike a balance between innovation and safety. In June 2023, the European Parliament approved its negotiating position on the EU AI Act, which is expected to take effect from around 2025. The South Korean government has also announced a $480 million investment over five years to develop AI foundation models.

In this context, companies are often unsure about what they may build and what they must restrict. While it is relatively easy to restrict a particular feature, application, or use case, restrictions at the foundation-model level can hinder overall progress.

However, certain precautions are essential. For instance, what if a medical dataset is fabricated or hallucinated to appear authentic? As Stephen King, Senior Fellow at HEA and Senior Lecturer in Media at Middlesex University Dubai, points out, "Visual artists legitimately fear that new works created in their style will compete with their copyrighted works or worse, that works will be attributable to them that they did not create."
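One modest technical precaution against silently altered or substituted training data, sketched below as a hypothetical example, is to record a cryptographic digest when a dataset is validated and verify it again before training. The file name and digest shown are placeholders, and this is only one small safeguard, not a complete answer to fabricated data.

import hashlib
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(path: Path, approved_digest: str) -> None:
    """Refuse to proceed if the dataset no longer matches its validated digest."""
    if sha256_of_file(path) != approved_digest:
        raise RuntimeError(f"Integrity check failed for {path}; do not train on it.")

# Hypothetical usage, with a digest recorded at validation time:
# verify_dataset(Path("medical_records.csv"), approved_digest="e3b0c44298fc1c14...")

This guards against tampering with an approved file; it does not, by itself, detect data that was fabricated before validation, which still requires human and procedural review.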

What are possible solutions?

  • Enterprises need to build their own in-house, proprietary AI models and restrict the use of external platforms and models.
  • Developing ethical talent is crucial. While it is easy to recruit high-IQ competitive coders, the challenge is to hire and certify highly ethical AI developers.
  • Researcher and faculty recruitment and promotions should not be based solely on the number of papers published, since AI can now generate research papers; ethical considerations should also carry significant weight.
  • Design Thinking should be incorporated as a mainstream subject in colleges and schools. Philosophy and Ethics can no longer be treated as optional electives; we need to devise methods to instill these values. This is a key challenge for universities and enterprises.

The question we face is whether we are prepared to adopt AI without compromising on ethics. This is achievable by giving ethics education the same weight in the curriculum as core domain subjects.

 

Dr. Ravi Saripalle

Director, Center for Innovation and Incubation, GVPCE

Founder, Inspire to Innovate Storytelling Movement (http://i2iTM.blogspot.com)

https://www.linkedin.com/in/ravisaripalle/

 
