Wadhwani School of Data Science and Artificial Intelligence
  • People
    • Faculty
    • Researchers
    • Management
    • Staff
    • Alumni
  • Academics
    • UG Programs
    • PG Programs
    • Research
    • Online Certificate Programs
    • Online Courses
    • Training Programs
  • Opportunities
    • Internships
    • Fellowships
    • Industry Collaborations
    • Faculty Careers
  • News & Events
    • Events
    • News
    • Newsletter
  • Research
    • Overview
    • Themes
    • Projects
    • Research Centres
    • Collaborations
  • Outcomes
    • Publications
    • Preprints
    • Whitepapers
    • Software & Datasets
    • Blogs
  • PhD Candidacy Exam Format
  • Open Positions
  • Grievances
  • For Current Students
  • For Prospective Industry
  • For Prospective Faculty
  • For Prospective Students
  • Upcoming Events
  • Contact


Explainability

LExT: Towards Evaluating Trustworthiness of Natural Language Explanations

Publications

As Large Language Models (LLMs) become increasingly integrated into high-stakes domains, there have been several approaches proposed toward generating natural language explanations. These explanations are crucial for …

Tags: NLP, Trustworthiness, Explainability

Demystifying Brain Tumor Segmentation Networks: Interpretability and Uncertainty Analysis

Publications

The accurate automatic segmentation of gliomas and its intra-tumoral structures is important not only for treatment planning but also for follow-up evaluations. Several methods based on 2D and 3D Deep Neural Networks …

Tags: Deep Learning, Explainability


Founded in 2024, the School brings together faculty with expertise across various areas of Data Science and AI to work on impactful problems of direct relevance to society.

Contact Us

044 2257 8980
office@dsai.iitm.ac.in
6th Floor, New Academic Complex 2,
Indian Institute of Technology Madras,
Chennai-600036, India


Wadhwani School of Data Science and Artificial Intelligence | IIT Madras © 2026