Paper II · Unit 2

Artificial Intelligence and Machine Learning

3.1 AI — Foundations

Artificial Intelligence is the simulation of human cognitive functions (learning, reasoning, problem-solving, perception, language understanding) by machines.

History milestones:

  • 1950: Alan Turing proposes the Turing Test — if a machine's responses are indistinguishable from a human's, it is "intelligent"
  • 1956: John McCarthy coins the term "Artificial Intelligence" at the Dartmouth Conference
  • 1997: IBM Deep Blue beats world chess champion Garry Kasparov
  • 2011: IBM Watson wins Jeopardy! — NLP milestone
  • 2016: Google DeepMind's AlphaGo defeats Lee Sedol at Go, the first AI to beat a top-ranked professional in a game with ~10¹⁷⁰ possible board positions
  • 2022–2023: ChatGPT (OpenAI), GPT-4, Gemini — Large Language Models reach mass consumer adoption

3.2 Machine Learning

Machine Learning (ML) is a subset of AI where systems improve their performance through experience (data), without being explicitly programmed.

Types of ML:

  • Supervised learning: learns from labelled training data. Algorithms: linear regression, decision trees, SVM, neural networks. Applications: spam detection, medical diagnosis, price prediction
  • Unsupervised learning: finds hidden patterns in unlabelled data. Algorithms: k-means clustering, PCA, autoencoders. Applications: customer segmentation, anomaly detection, recommendation
  • Reinforcement learning: an agent learns by receiving rewards/penalties. Algorithms: Q-learning, policy gradient. Applications: AlphaGo, robotics, autonomous vehicles, game AI
  • Semi-supervised learning: learns from a mix of labelled and unlabelled data. Algorithms: self-training, GANs. Applications: image annotation, text classification
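
A minimal supervised-learning sketch in plain Python illustrates the first row: fit a line to labelled examples, then predict on an unseen input. The data here (hours studied vs. exam score) is hypothetical, chosen only for illustration.

```python
# Supervised learning in miniature: 1-D linear regression by least squares.
# Labelled training data (hypothetical): hours studied -> exam score.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [52.0, 55.0, 61.0, 64.0, 68.0]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares estimates of slope and intercept.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict(x):
    """Apply the learned model to a new, unlabelled input."""
    return intercept + slope * x

print(round(slope, 2), round(intercept, 2))   # 4.1 47.7
print(round(predict(6.0), 1))                 # 72.3
```

Real systems use libraries such as scikit-learn, but the workflow is the same: labelled data in, fitted parameters out, predictions on new inputs.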

3.3 Deep Learning and Neural Networks

Artificial Neural Network (ANN): Inspired by the brain's neurons. Consists of layers of interconnected nodes — input layer → hidden layers → output layer. Each connection has a weight adjusted during training (backpropagation).
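
Backpropagation is the chain rule applied layer by layer. A minimal sketch in plain Python (all weights and inputs are hypothetical toy values) computes the gradient of the loss with respect to a first-layer weight, then checks it against a finite-difference estimate:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny two-layer chain: input -> one hidden unit -> one output unit.
x, t = 1.0, 1.0          # input and target (hypothetical)
w1, b1 = 0.5, 0.0        # first-layer weight and bias
w2, b2 = -0.3, 0.1       # second-layer weight and bias

def loss(w1_):
    """Forward pass for a given first-layer weight; squared-error loss."""
    h = sigmoid(w1_ * x + b1)
    y = sigmoid(w2 * h + b2)
    return (y - t) ** 2

# Backpropagation: chain rule through output layer, hidden layer, weight.
h = sigmoid(w1 * x + b1)
y = sigmoid(w2 * h + b2)
grad_analytic = 2 * (y - t) * y * (1 - y) * w2 * h * (1 - h) * x

# Sanity check against a numerical (finite-difference) gradient.
eps = 1e-5
grad_numeric = (loss(w1 + eps) - loss(w1 - eps)) / (2 * eps)
print(grad_analytic, grad_numeric)
```

Training repeats this for every weight in the network and nudges each one against its gradient (gradient descent).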

Deep Learning uses neural networks with many hidden layers (from roughly ten to over a thousand). Its rise was enabled by three factors:

  • Large datasets
  • GPU computing power
  • Algorithmic advances (ReLU activation, dropout regularisation, batch normalisation)
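
Two of the algorithmic advances above can be sketched in a few lines of plain Python. This is illustrative only; the `dropout` helper and its arguments are hypothetical, not a library API:

```python
import random

def relu(z):
    """ReLU activation: identity for positive inputs, zero for negatives."""
    return max(0.0, z)

def dropout(activations, p=0.5, training=True, rng=random):
    """Inverted dropout: during training, zero each activation with
    probability p and rescale survivors by 1/(1-p); at inference, pass
    activations through unchanged."""
    if not training:
        return list(activations)
    return [0.0 if rng.random() < p else a / (1.0 - p) for a in activations]

pre_activations = [-2.0, -0.5, 0.0, 1.5, 3.0]
acts = [relu(z) for z in pre_activations]
print(acts)                   # [0.0, 0.0, 0.0, 1.5, 3.0]
print(dropout(acts, p=0.5))   # survivors doubled, the rest zeroed at random
```

ReLU avoids the vanishing gradients that plagued sigmoid/tanh in deep stacks; dropout fights overfitting by preventing units from co-adapting.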

Key deep learning applications:

  • Computer Vision: Image classification (ResNet), object detection (YOLO), face recognition (FaceNet), medical image analysis (studies report >90% accuracy in detecting some cancers on CT scans)
  • NLP (Natural Language Processing): Machine translation (Google Translate), sentiment analysis, chatbots; Transformer architecture (2017, Google) → BERT, GPT series
  • Generative Models: GANs (Generative Adversarial Networks, 2014, Ian Goodfellow) — two competing neural networks → realistic synthetic images, deepfakes; Diffusion models (Stable Diffusion, DALL-E 3, Midjourney) — state-of-the-art image generation
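
At the core of the Transformer is scaled dot-product attention: each query scores every key, the scores are softmax-normalised, and the output is the correspondingly weighted average of the values. A toy sketch in plain Python (hypothetical 2-dimensional vectors, no learned projections):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(v - m) for v in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

Q = [[1.0, 0.0]]                      # one query
K = [[1.0, 0.0], [0.0, 1.0]]          # two keys
V = [[10.0, 0.0], [0.0, 10.0]]        # two values
print(attention(Q, K, V))
```

Here the query matches the first key most closely, so the output leans towards the first value vector. Real Transformers run many such attention heads in parallel over learned projections of the input.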

3.4 Large Language Models (LLMs) and Generative AI (PYQ 2024 — Q27)

Large Language Models are neural networks trained on massive text corpora (hundreds of billions of tokens) to predict and generate text. They are built on the Transformer architecture.
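
The "predict the next token" objective can be illustrated with a toy bigram model that simply counts which word follows which. This is not a Transformer; it is plain Python showing the prediction task itself, with a hypothetical corpus and helper names:

```python
from collections import Counter, defaultdict

# Toy training corpus (hypothetical).
corpus = "the cat sat on the mat the cat ate".split()

# Count bigrams: how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training, if any."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # cat  ("cat" follows "the" twice, "mat" once)
```

An LLM does the same job with a Transformer over subword tokens and billions of parameters, producing a probability for every token in its vocabulary rather than a single count-based guess.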

  • GPT-4 (OpenAI): ~1.8 trillion parameters (unconfirmed estimate); multimodal (text + images); powers ChatGPT
  • Gemini Ultra (Google DeepMind): parameters not disclosed; multimodal; claimed by Google to exceed GPT-4 on several benchmarks
  • Claude 3 (Anthropic): parameters not disclosed; emphasis on safety and helpfulness
  • Llama 3 (Meta): 70B–405B parameters; open-weight models enabling custom deployment
  • Krutrim (Ola, India): parameters not disclosed; billed as India's first home-grown LLM; supports 22 Indian languages

Concerns about LLMs:

  • Hallucination: Generating plausible-sounding but factually incorrect information
  • Bias: Reflecting biases in training data (racial, gender, cultural)
  • Deepfakes and misinformation: LLM-generated fake news, synthetic voices, political disinformation
  • Job displacement: The WEF Future of Jobs Report 2020 estimated that 85 million jobs may be displaced by AI and automation by 2025, with 97 million new roles created
  • AI Safety: Concern about misaligned AI systems developing unintended objectives (AGI risk)

AI Governance:

  • EU AI Act (2024): World's first comprehensive AI regulation; risk-based — banned AI (social scoring, real-time biometric surveillance), high-risk (medical, judicial), general purpose AI (LLMs) with transparency requirements
  • G20 AI Principles: Human-centred, ethical, inclusive, transparent; endorsed 2019, operationalised under India's G20 Presidency 2023
  • India AI Mission (2024): Rs 10,371 crore for AIRAWAT, LLMs for Indian languages, AI skills (belongs to Topic 71 for India-specific detail)