Régely Gábor Balázs: Neural Collapse in Quantised Neural Networks

Independent project, professional internship II

2025/26 academic year, semester II

Supervisors:
Lukács András (ELTE Institute of Mathematics, AI Research Group)
Rainie Heck (Institute of Mathematics, AI Research Group)
Report:
---
Presentation:
---

Modern neural networks not only achieve impressive accuracy but also reveal surprisingly simple geometric structure in their learned representations. One such phenomenon, neural collapse, emerges once a network has effectively reached zero training error: the last-layer features of each class concentrate around their class mean, and these means arrange themselves into a highly symmetric, low-dimensional configuration (a simplex equiangular tight frame). In this project, we will study how this phenomenon interacts with quantization, a widely used technique for compressing neural networks by representing weights and activations with low-bit precision. Quantization is not only practical for deploying resource-efficient models; it also introduces a natural form of regularization that may reshape the geometry of learned representations. You will experimentally investigate how quantized networks behave in the post-zero-loss regime, and you will use geometric and combinatorial tools to understand the structure of both neural collapse and quantization. The work offers an opportunity to explore a cutting-edge topic at the intersection of deep learning theory, geometry, and efficient AI models.
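To make the two ingredients of the project concrete, here is a minimal sketch of (a) uniform symmetric quantization of a weight tensor and (b) an NC1-style diagnostic, the ratio of within-class to between-class scatter, which tends toward zero as representations collapse onto their class means. The helper names `quantize_uniform` and `nc1_metric`, the synthetic data, and the specific quantization scheme are illustrative assumptions, not part of the project description.

```python
import numpy as np

def quantize_uniform(w, bits=4):
    """Uniform symmetric quantization to `bits` bits, then dequantize
    ("fake quantization"). A common post-training scheme; hypothetical
    helper, not tied to any particular library."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for 4-bit signed
    max_abs = np.max(np.abs(w))
    scale = max_abs / qmax if max_abs > 0 else 1.0  # one scale per tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                                # values back on the real axis

def nc1_metric(features, labels):
    """Within-class / between-class scatter ratio: an NC1-style diagnostic
    that shrinks toward 0 as within-class variability collapses."""
    mu_global = features.mean(axis=0)
    sw, sb = 0.0, 0.0
    for c in np.unique(labels):
        fc = features[labels == c]
        mu_c = fc.mean(axis=0)
        sw += np.sum((fc - mu_c) ** 2)                      # within-class scatter
        sb += len(fc) * np.sum((mu_c - mu_global) ** 2)     # between-class scatter
    return sw / sb

# Toy check on synthetic data: tightly clustered classes give a small NC1
# ratio, and 4-bit quantization stays close to the original weights.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
w_q = quantize_uniform(w, bits=4)
feats = np.vstack([rng.normal(loc=m, scale=0.01, size=(50, 2))
                   for m in ([0, 5], [5, 0], [-5, -5])])
labels = np.repeat([0, 1, 2], 50)
print(nc1_metric(feats, labels))  # small ratio: near-collapsed classes
```

In experiments, one would compute such a metric on the penultimate-layer features of a trained network, before and after quantization, to see whether low-bit precision accelerates, delays, or distorts the collapse.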