Thursday 30 November 2023

The Power of Dark Colour! Interesting Trend in 2024 UI/UX Design! Read this Story!

Dear Friends and Students
 
Lord Krishna is depicted in Krishnavarna/Shyam, a specific shade of Dark Blue! The color dark blue reflects His divine nature and His embodiment of the universe's boundless potential. It symbolizes his transcendence of earthly limitations and his connection to the ultimate reality. 
 
We often ignore such deep meanings and ultimately prefer Swetha Varnam (white)! However, in the modern gadget world, a white background is more costly from an energy perspective.
 
Modern displays fall into two broad families. LCD screens use a backlight that illuminates the pixels from behind; this backlight is typically a bright white light, runs at roughly the same power regardless of what is on screen, and accounts for much of the display's energy use. OLED screens are different: each pixel emits its own light, so a dark pixel draws little or no power.
 
On an OLED display, each pixel consists of three subpixels: red, green, and blue. To produce a white pixel, all three subpixels must be lit at full brightness, which consumes the most power.
 
To our surprise, on an OLED panel a black pixel needs roughly 0% power, dark grey about 25%, medium grey about 50%, light grey about 75%, and white the full 100%! More than that, dark mode can also have other benefits, such as reducing eye strain and, for some users, improving sleep quality.
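The percentage figures above can be turned into a back-of-the-envelope estimate. Below is a minimal Python sketch, assuming a simplified model in which each OLED pixel's power scales linearly with its brightness (real panels also vary with color and panel efficiency); the function name is my own:

```python
def oled_power_fraction(gray_levels):
    """Estimate relative OLED panel power under a simplified linear model:
    each pixel's power scales with its brightness (0.0 = black, 1.0 = white)."""
    return sum(gray_levels) / len(gray_levels)

# Matches the rough figures in the text: black ~0%, white ~100%
assert oled_power_fraction([0.0] * 4) == 0.0   # all-black screen
assert oled_power_fraction([1.0] * 4) == 1.0   # all-white screen

# A mostly dark UI with one bright element uses a fraction of the power
print(oled_power_fraction([0.0, 0.0, 0.25, 1.0]))  # 0.3125
```

Under this model, a dark-themed screen that is mostly black with a few bright elements draws only a small fraction of the power of an all-white page.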
 
Beyond all this, it enhances accessibility. Dark mode can make it easier for people with vision impairments to use digital products, because it can increase the contrast between the text and the background, making the text easier to read.
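The contrast point can be made concrete. Here is a small Python sketch of the WCAG 2.1 relative-luminance and contrast-ratio formulas (WCAG level AA asks for at least 4.5:1 for normal text); the helper names are my own:

```python
def _linearize(c):
    # WCAG 2.1 sRGB linearization of a single 0-255 channel
    c /= 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    """Relative luminance of an (r, g, b) color, each channel 0-255."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors, from 1.0 to 21.0."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# White text on a black background gives the maximum possible ratio
print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 1))  # 21.0
```

The ratio is symmetric, so white-on-black and black-on-white both score 21:1, which is why well-designed dark themes can be highly readable.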
 
Now Google, Android, Apple, X, Reddit, and many others offer dark mode! Hence, dark mode is a versatile and beneficial UI/UX trend that is likely to keep growing in popularity in 2024!!

In 2024, code will be generated by Code Assistants/CoPilots!! But, being UX/UI designers, students should still learn the fundamentals: color theory, laying out elements on a screen, using typography effectively, and creating a consistent user experience; CSS selectors, properties, and values; and accessibility for visually impaired users, including using sufficient color contrast, avoiding patterns that can trigger seizures, and providing alternative text for images!!
 
Teachers and students of CSE, your hats are going to change shortly! Do you have a coding-assistant account? If not, get ready!! I suggest you go through this video: https://lnkd.in/gDP2z7ri
 
What are you waiting for? Computer Science learning is no longer "How to write new code on your own" but "How do you leverage existing code, tweak it, and re-write it for your designated task?" Prepare yourself for 2024 coding!
 
Ravi Saripalle

Tuesday 28 November 2023

Patent Edu Series # 4- Wet Grinder! How did the City of Coimbatore become a GI Tag holder for the wet grinder? Listen to this Story!!

Dear Friends

Have you ever used a grinding stone, fixed or fitted to the ground? In my childhood, we used to do this! It was not fun; I used to feel the weight, but there was no other option if you wanted to taste your favorite, yummy coconut chutney!

Today nobody does this, right? If this resonates with you, listen to the story below!!

Sunday 19 November 2023

Patent Edu Series # 3- Cricket Bat and Free Standing Cricket Wicket with Flexible or Detachable Stumps

World Cup Season is over! Understand the Technology and Patents behind the Game! Listen to this!

Happy Patenting

https://youtu.be/jnA49-02br4
https://www.instagram.com/p/Cz1fiz7OheY/

Ravi Saripalle



Wednesday 1 November 2023

Critical Question for Governments and Global Leaders: AI Foundation Models or Application Layer Regulations?

Dear Friends and University Students,

Today, Andrew Ng (Chairman of Coursera, Founder & CEO of Landing AI, Founder of DeepLearning.AI, Managing General Partner at AI Fund, and former Director of the Stanford AI Lab, with a B.S. from Carnegie Mellon University, an M.S. from MIT, and a Ph.D. from the University of California, Berkeley) raised an important issue regarding the "risk of regulations for companies developing foundational AI models in the name of national security".

You might be wondering, what is a Foundational AI Model? Foundation models are large models trained on broad datasets to learn general patterns and relationships in data, and they can then be adapted or fine-tuned for many specific downstream tasks.

Some examples of influential neural-network models (of which GPT and BERT are foundation models in the strict sense) include:

- LeNet-5 (a classic convolutional neural network for handwritten digit recognition)
- AlexNet (for image classification)
- LSTM (Long Short-Term Memory, used in speech recognition and sentiment analysis)
- GPT (Generative Pre-trained Transformer, for language translation, chatbots, and content generation like ChatGPT)
- BERT (Bidirectional Encoder Representations from Transformers, for text classification and question answering)
- DQN (Deep Q-Network, used in video games and controlling robots)
- YOLO (You Only Look Once, for detecting objects in images and video streams)

Andrew argues that "adding burdens to foundational model development unnecessarily slows down AI's progress" and suggests that regulation should focus on the application layer (e.g., underwriting software, healthcare applications, self-driving, and chat applications).

In my view, we need to strike a balance between innovation and safety. In 2023, the European Parliament passed the AI Act, which is expected to come into force in 2025. The South Korean government has also announced a $480 million investment over five years to develop AI foundation models.

In this context, companies often face confusion about what to allow and what to restrict. While it is relatively easy to restrict a particular feature, application, or use case, restrictions at the foundational level can hinder overall progress.

However, certain precautions are essential. For instance, what if a medical dataset is fabricated or hallucinated to appear authentic? As Stephen King, Senior Fellow at HEA and Senior Lecturer in Media at Middlesex University Dubai, points out, "Visual artists legitimately fear that new works created in their style will compete with their copyrighted works or worse, that works will be attributable to them that they did not create."

What is the solution for this?

  • Enterprises need to create their own in-house proprietary AI models and restrict external platforms/models.
  • Developing ethical talent is crucial. While it's easy to recruit high-IQ competitive coders, the challenge is to hire and certify highly ethical AI developers.
  • Researcher/Faculty recruitment and promotions should not be based solely on the number of papers, as AI can now generate research papers. Ethical considerations should also play a significant role.
  • Design Thinking should be incorporated as a mainstream subject in colleges and schools. Philosophy and Ethics are no longer free electives; we need to devise methods to instill these values. This is a key challenge for universities and enterprises. 

The question we face is whether we are prepared to adopt AI without compromising on ethics. This is achievable by including ethics education in the curriculum with the same level of importance as core domain subjects.

 

Dr. Ravi Saripalle

Director, Center for Innovation and Incubation, GVPCE

Founder, Inspire to Innovate Storytelling Movement (http://i2iTM.blogspot.com)

https://www.linkedin.com/in/ravisaripalle/