Sunday, 28 February 2021

A 50-Year Journey!! The Story of the World of Automation and AI! From Nov 15, 1971, "Announcing a New Era of Integrated Electronics", the first ad for a microprocessor (the Intel 4004), to Cerebras Systems' recent world's-largest AI compute chip! Who should be given credit? Read this interesting perspective!

Dear Friends and Students

The seed of automation celebrates 50 years!! Ted Hoff's team at Intel developed the 4004, a general-purpose processor that could be used across many devices. The credit goes to Intel! Beyond that, we need to appreciate another company, the one that asked Intel to develop this chip: Busicom, a Japanese calculator maker (Nippon Calculating Machine Corp)!

Busicom was the Japanese company that owned the rights to Intel's first microprocessor, the Intel 4004, which it created in partnership with Intel in 1970. Had Busicom not allowed general sale, things would probably have turned out very differently, right? Busicom held exclusive rights to the design and its components in 1970 but shared them with Intel in 1971!! This year we are celebrating 50 years of this chip's journey!!

I feel the credit goes to the founders of both Busicom and Intel: Tadashi Sasaki and Robert Noyce!

Tadashi Sasaki was a Japanese engineer and a founding member of Busicom who drove the development of the Intel 4004 microprocessor and later pushed Sharp into the LCD calculator market. Robert Norton Noyce, nicknamed "The Mayor of Silicon Valley", was an American physicist who co-founded Fairchild Semiconductor in 1957 and Intel Corporation in 1968.

While we are proud of this legacy, Cerebras recently unveiled a chip that accelerates deep learning: the Wafer-Scale Engine (WSE), the largest chip ever built! At 56x the size of any other chip, the WSE delivers more compute, more memory, and more communication bandwidth. It packs the performance of a room full of servers into a single unit the size of a dorm-room mini-fridge!! What miniaturization!

Andrew Feldman is co-founder and CEO of Cerebras Systems; he holds a BA and an MBA from Stanford University. Gary Lauterbach is co-founder and CTO, and holds more than 50 patents. The perfect combination of management and technology!!

Let's look at the specifications of this chip:

  • Sparse Linear Algebra Compute (SLAC) cores: 400,000
  • On-chip memory (SRAM): 18 GB
  • Memory bandwidth: 9.6 PB/s
  • Interconnect bandwidth: 100 Pb/s
  • System I/O: 1.2 Tb/s
  • Dimensions: 15 rack units

Dear Friends and Students (Machine Learning Lovers)

The Cerebras software platform integrates with popular machine learning frameworks like TensorFlow and PyTorch. It also has a programmable C++ interface that allows researchers to extend the platform and develop custom kernels.
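This post does not show Cerebras' own API, so purely as a reference point, here is a minimal sketch of a standard PyTorch training step, the kind of framework code such platforms aim to accept with little change. The tiny model, random data, and hyperparameters are made-up placeholders for illustration only.

```python
# A plain PyTorch training step -- illustrative only, nothing Cerebras-specific.
import torch
import torch.nn as nn

# Tiny made-up classifier and random data, just to show the standard workflow.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(32, 784)          # a fake mini-batch of 32 samples
targets = torch.randint(0, 10, (32,))  # fake labels

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)  # forward pass
loss.backward()                         # backward pass
optimizer.step()                        # weight update
print(f"loss: {loss.item():.4f}")
```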

I liked their tagline!! "Explore More Ideas in Less Time. Reduce the Cost of Curiosity". I am really amazed by this statement: Reduce the Cost of Curiosity!

This year, many engineering colleges in India started CSM and CSD courses (CSE-AI-ML and CSE-Data Science). Cerebras Systems announced an internship. Please read the following requirements, which will help you plan your AI-ML skills (cerebras.net/careers/).

The Role

Cerebras is developing both novel algorithms to accelerate training of existing neural network architectures and new, custom network architectures for the next generation of deep learning accelerators. For this internship position, we are looking for hands-on researchers who can:

  • Take an algorithm from inception, to a TensorFlow or PyTorch implementation, to results competitive with the state of the art on benchmarks such as ImageNet classification.
  • Develop algorithms for training and inference with sparse weights and sparse activations (see the sketch after this list).
  • Develop algorithms for training at unprecedented levels of scale and parallelism.
  • Publish results at machine learning conferences and in company messaging, such as blog posts and white papers.
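Purely as an illustration of what "sparse weights and sparse activations" can look like in practice, the following sketch uses plain PyTorch magnitude pruning and a top-k activation mask. The layer sizes and sparsity levels are made up, and this is not Cerebras' algorithm.

```python
# Illustrative sketch of weight and activation sparsity in plain PyTorch.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(512, 512)

# Weight sparsity: zero out the 90% of weights with the smallest magnitude.
prune.l1_unstructured(layer, name="weight", amount=0.9)
weight_sparsity = (layer.weight == 0).float().mean().item()
print(f"weight sparsity: {weight_sparsity:.1%}")   # ~90% zeros

# Activation sparsity: keep only the top-k activations per sample, zero the rest.
x = torch.randn(8, 512)
acts = torch.relu(layer(x))
k = 64                                              # keep 64 of 512 activations
vals, idx = acts.topk(k, dim=1)
sparse_acts = torch.zeros_like(acts).scatter_(1, idx, vals)
print(f"activation sparsity: {(sparse_acts == 0).float().mean().item():.1%}")
```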

Skills & Qualifications

  • Publications in machine learning areas such as supervised, unsupervised, or reinforcement learning, or in statistical modeling such as generative and probabilistic modeling.
  • Graduate and undergraduate students with a background in deep learning and neural networks.
  • Experience with deep learning models such as Transformers, RNNs, and CNNs for language modeling, speech recognition, and computer vision.
  • Experience with high-performance machine learning methods such as distributed training, parameter servers, and synchronous and asynchronous model parallelism.
  • Experience with 16-bit / low-precision training and inference using half-precision floating point or fixed point (a minimal PyTorch illustration follows this list).
  • Experience with model compression and model quantization.
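For the low-precision and quantization items above, here is a hedged, minimal PyTorch sketch of a mixed-precision (FP16) training step followed by post-training dynamic quantization. The tiny model and random data are invented for illustration, and nothing here is Cerebras-specific.

```python
# Mixed-precision training and dynamic quantization in plain PyTorch (illustrative).
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(16, 128, device=device)
y = torch.randint(0, 10, (16,), device=device)

# Mixed-precision step: matmuls run in half precision on GPU, FP32 elsewhere.
use_amp = device == "cuda"
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)
optimizer.zero_grad()
with torch.cuda.amp.autocast(enabled=use_amp):
    loss = loss_fn(model(x), y)
scaler.scale(loss).backward()   # scale the loss to avoid FP16 gradient underflow
scaler.step(optimizer)
scaler.update()

# Post-training dynamic quantization: Linear weights stored as int8 (CPU inference).
quantized = torch.quantization.quantize_dynamic(model.to("cpu"), {nn.Linear},
                                                dtype=torch.qint8)
print(quantized)
```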

Happy Deep Learning!! Raise your Bar

Your Well-wisher

Ravi Saripalle

Join Inspire to Innovate Storytelling Movement (i2itm.blogspot.com)

Source Inspiration:

https://www.bbc.com/news/technology-49395577

https://www.hindustantimes.com/photos/photos-how-the-microprocessor-changed-our-lives-for-you-101614411430009-8.html
