10 IIIT-H Projects That Clinched ANRF ARG Awards

Securing the Digital Frontier: Inside the SyPy Research Group


6 March 2026
Among the 15,700 proposals submitted nationwide, ten research projects from the International Institute of Information Technology Hyderabad (IIIT-H) emerged as winners of the prestigious Advanced Research Grant (ARG) – the flagship funding scheme of the Anusandhan National Research Foundation (ANRF) – an extraordinary showing that underscores the institute’s growing influence in cutting-edge science and technology. The ARG, awarded by ANRF, the Government of India’s national funding body for research and innovation, is designed to support ambitious, investigator-driven projects led by established researchers pursuing novel, high-impact ideas. From foundational research to real-world innovation, the selected projects spotlight the depth, diversity, and ambition of IIIT-H’s research ecosystem, spanning areas that include quantum computing, robotics, artificial intelligence, communication systems, speech technology and climate research.
As artificial intelligence reshapes industries and digital finance redraws economic boundaries, the risks beneath our connected world are growing just as fast. At IIIT-H, the Security and Privacy (SyPy) Research Group is working behind the scenes to uncover hidden vulnerabilities, defend emerging technologies, and build the foundations of digital trust. “We live in an online world,” says Prof. Ankit Gangwal. “Our savings move through digital wallets. Our faces unlock our phones. Our conversations are filtered through machine learning systems that predict what we want before we type it. Every swipe, tap, and transaction depends on layers of invisible code. But what happens when that code is compromised?” Prof. Gangwal’s group is not just asking exactly that question but working relentlessly to answer it. “To secure the future, we must first understand the vulnerabilities of the present,” he remarks. Security failures rarely announce themselves loudly at first. They hide in edge cases, in overlooked assumptions, in code that “should work.”
In an era where large language models dazzle us with fluency, confident reasoning, and near-human responses, Prof. Manish Shrivastava urges caution by pulling back the curtain on AI’s “illusion of reasoning,” and makes a compelling case for smarter data, smaller models, and a more thoughtful future for AI, especially in the Indian context. Prof. Shrivastava’s research philosophy can best be described with two ‘Rs’: “R for research and R for rabbit holes.” He explains that there are three types of research: the goal-oriented kind, which is focused and socially impactful; the opportunistic kind, which jumps into emerging gaps in a field; and the exploratory type, driven by intellectual curiosity. Most of his own work, he says, falls into the third category. It’s these rabbit holes that have led him deep into one of today’s most urgent questions: are large language models (LLMs) actually doing what we think they are? Anybody using an LLM treats it as an intelligent entity. But for Prof. Shrivastava, it is “facts plus language”.