Saturday 23 September 2023

"Preparing Coders for Tomorrow: The Evolution of Code Comprehension in Generative AI”! Computer Science Faculty and IT Recruiters' Dilemma!

Dear Friends and Students,

This article is highly debatable, much like the age-old question of whether the chicken or the egg came first. Twenty-seven years ago, during my academic journey, I encountered a significant challenge. As part of a lab exam, we were tasked with implementing the quicksort algorithm – a sorting technique used to arrange numbers in ascending or descending order. While I understood the logic and could outline the steps in plain English, I struggled to pass the test case when it came to implementing it in C++. Regrettably, this remains a personal weakness to this day.
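
For readers who have never attempted it, the idea behind quicksort is simple: pick a pivot element, move everything smaller to its left and everything larger to its right, then repeat on each half. A minimal C++ sketch of that idea (purely illustrative, and certainly not my exam submission) looks like this:

```cpp
#include <iostream>
#include <utility>
#include <vector>

// Partition a[low..high] around the last element as the pivot and
// return the pivot's final position.
int partition(std::vector<int>& a, int low, int high) {
    int pivot = a[high];
    int i = low - 1;
    for (int j = low; j < high; ++j) {
        if (a[j] <= pivot) {
            std::swap(a[++i], a[j]);
        }
    }
    std::swap(a[i + 1], a[high]);
    return i + 1;
}

// Sort a[low..high] by recursively sorting the two sides of the pivot.
void quicksort(std::vector<int>& a, int low, int high) {
    if (low < high) {
        int p = partition(a, low, high);
        quicksort(a, low, p - 1);
        quicksort(a, p + 1, high);
    }
}

int main() {
    std::vector<int> numbers = {34, 7, 23, 32, 5, 62};
    quicksort(numbers, 0, static_cast<int>(numbers.size()) - 1);
    for (int n : numbers) std::cout << n << ' ';   // prints: 5 7 23 32 34 62
    std::cout << '\n';
}
```

Notice how little of this is "logic" and how much is punctuation, types, and indices: exactly the part that trips people up in a timed exam.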

The mere recollection of that day still makes me break into a sweat. I grappled with the code, but the test case remained unyielding. Finally, in the last frantic 10 minutes, I transcribed the code onto paper and submitted it to the examiner. An anxious moment followed as she asked, "Will this code work?" My response, laden with uncertainty, ranged from "Yes" to "No" to "Maybe." Ultimately, I lost 15 marks due to my inability to memorize the syntax, even though I grasped the underlying logic and had written pseudocode.

It was this fear that led me to pursue a career in Pre-Sales at Wipro. However, my ability to comprehend and adapt others' well-written code to my context allowed me to excel in managing projects and even take on the role of Dev Manager. Nevertheless, if tasked with passing a test case in an examination-style coding assessment today, I might still falter. In fact, I have faced such failures in the past, and I am confident that many students pursuing computer science degrees share similar experiences.

The landscape of coding education is undergoing a profound transformation. Last year, when I boldly proclaimed at a conference that "Generative AI would write code in the future," I faced backlash. I also asserted that there is no need to teach "how to write code"; instead, we should focus on imparting "Code Comprehension and Task Comprehension skills." These ideas were met with disagreement.

What may come as a surprise is the following statistic: "GitHub Copilot, an AI-powered code completion tool, is now being used to generate an average of 46% of the code that developers are writing. This marks a substantial increase from the 27% generated by Copilot in June 2022."

Copilot has gained popularity among over 1 million developers and has generated more than 3 billion accepted lines of code. With this rate of adoption, one must consider what needs to change in university programming education and in the coding tests and contests used to assess fresh graduates.

If you visit codeium.com/playground, you can write a simple prompt (a problem statement in English), press "Enter," and watch as AI generates code for you. It supports coding in over 70 languages. In light of these developments, I propose a renewed focus when teaching and assessing computer science students:

Code Comprehension Skills (40%): This skill set involves reading and understanding code. It empowers students to maintain, extend, and debug existing code, and to craft new code that is efficient, reliable, and maintainable. Encourage students to read code regularly, practice coding with AI assistance, employ debugging tools (e.g., Codecheck), consult the documentation, and seek answers from AI tools like Bard, Bing Chat, or ChatGPT. (A sample comprehension exercise follows this list.)

Task Comprehension Skills (40%): These skills encompass understanding requirements, breaking tasks into manageable steps, and identifying the necessary data structures and algorithms.

Syntax and Writing Code (20%): The significance of hand-writing syntactically perfect code has diminished, which is why I recommend reducing its weight to 20%. We are entering an era of Generative AI, where the emphasis shifts towards comprehension over rote memorization.
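
To make the comprehension-first idea concrete, here is the kind of exercise I have in mind: a short, hypothetical snippet of the sort an AI assistant might produce. Instead of asking students to reproduce syntax from memory, ask them to read it and answer the questions in the comments:

```cpp
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

// Comprehension questions for students:
//   1. What does this function return for the input used in main()?
//   2. Why is an unordered_map a better fit here than a plain vector?
//   3. How would you change it to ignore letter case?
std::string mostFrequentWord(const std::vector<std::string>& words) {
    std::unordered_map<std::string, int> counts;
    for (const auto& w : words) ++counts[w];    // count occurrences of each word

    std::string best;
    int bestCount = 0;
    for (const auto& [word, count] : counts) {  // pick the word seen most often
        if (count > bestCount) {
            best = word;
            bestCount = count;
        }
    }
    return best;
}

int main() {
    std::vector<std::string> words = {"ai", "code", "ai", "prompt", "ai", "code"};
    std::cout << mostFrequentWord(words) << '\n';   // prints: ai
}
```

Grading answers to questions like these rewards exactly the skills listed above: reading, reasoning about behaviour, and knowing where to look things up, rather than recalling semicolons under pressure.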

Does this approach make sense? Will it work? My intuition says, "Yes, it will!"

Best regards,

Ravi Saripalle

 

Thursday 14 September 2023

The Thirsty ChatGPT! Cultivate Responsible Prompt Engineering! Be Accountable for Your Prompts and the Water Footprint Behind Them!

Dear Friends and Students,

During a recent visit to a friend's home, I witnessed an interesting scenario. A 10-year-old boy was playfully engaging with ChatGPT, and his parents were thrilled to see him embrace this AI tool. While technology adoption is a positive development, I believe it's important to educate the entire family about a crucial aspect of these interactions.

In technical terms, our interactions with ChatGPT through the text box are referred to as "prompts." In this instance, the boy asked a simple question, "How are you?" and received a response from ChatGPT: "I'm just a computer program, so I don't have feelings, but I'm here to help you with any questions or tasks you have. How can I assist you today?" This exchange was fun and entertaining, but it's essential to understand that for roughly every 20-50 such prompts (a single short conversation), ChatGPT consumes approximately half a litre of water.

What may surprise you is that ChatGPT's water consumption comes largely from cooling the data center machines that power the AI. In fact, Microsoft's global water usage increased by 34 percent from 2021 to 2022, reaching nearly 1.7 billion gallons (source: www.businesstoday.in).

It's not just ChatGPT; every AI-powered data center has a significant water footprint. Consider some rough estimates of the numbers: Google Bard has about 100,000 daily active users, while ChatGPT boasts over 10 crore (100 million) active users overall. Microsoft Bing Chat has an estimated 50,000 daily active users, Meta Llama 2 around 25,000, Claude roughly 10,000, and GitHub Copilot about 5,000.

Starting with the estimated daily active users of ChatGPT and Bard, if each user generates an average of 10 prompts per day, the total number of prompts from these two engines alone would reach roughly 130 million per day. Adding in prompts from other players like Microsoft Bing Chat, Meta Llama 2, Claude, and GitHub Copilot, the daily total exceeds 135 million. To put this into perspective, those 130 million prompts would require a staggering 6,500,000 litres of water every day. Assuming an average person needs 3 litres of drinking water per day, that amount could sustain approximately 2,166,667 (about 2.2 million) people daily.
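
For anyone who wants to check the arithmetic, here is a small sketch that reproduces the estimate above. Every figure in it is one of this article's own assumptions (the per-prompt rate is simply the one implied by the totals; published estimates vary with the data center and the season), so treat the output as a rough order-of-magnitude picture rather than a measurement:

```cpp
#include <iomanip>
#include <iostream>

int main() {
    // All figures below are this article's rough assumptions, not measured values.
    const double promptsPerDay      = 130'000'000;  // ChatGPT + Bard prompts per day
    const double litresPerPrompt    = 0.05;         // about half a litre per 10 prompts (implied by the totals above)
    const double litresPerPersonDay = 3.0;          // drinking water needed per person per day

    const double litresPerDay    = promptsPerDay * litresPerPrompt;
    const double peopleSustained = litresPerDay / litresPerPersonDay;

    std::cout << std::fixed << std::setprecision(0)
              << "Water used per day: " << litresPerDay << " litres\n"            // ~6,500,000
              << "Daily drinking water for: " << peopleSustained << " people\n";  // ~2,166,667
}
```

Change any one of those assumptions, the number of prompts, the per-prompt rate, or the daily water requirement, and the headline numbers shift accordingly, which is exactly why transparency about these figures matters.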

I've been asked whether AI models like ChatGPT are sustainable. The answer is Yes/No, depending on the context. AI can be sustainable when used for vital tasks. For instance, when AI analyses medical images such as X-rays, CT scans, and MRIs to help doctors detect diseases like cancer, fractures, and neurological disorders, it greatly enhances healthcare accuracy and efficiency. However, we might consider delaying AI-powered meme generators, AI toys, and entertainment robots to reduce the water footprint. We can also postpone AI chatbots for casual conversations and AI virtual assistants for entertainment and convenience.

In conclusion, AI developers and users should be transparent about their products' water footprint. Users must be aware of the environmental impact of their choices. To address the water footprint of AI, developers, users, policymakers, and stakeholders must collaborate and take responsible actions.

Warm regards,

Ravi Saripalle