BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Amihay Hanany (Imperial College London)
DTSTART:20211021T070000Z
DTEND:20211021T081500Z
DTSTAMP:20260422T225637Z
UID:UNISTMath/1
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/UNISTMath/1/
 ">Magnetic Quivers and Physics at Strongly Coupled Quantum Field Theories<
 /a>\nby Amihay Hanany (Imperial College London) as part of UNIST Mathemati
 cal Sciences Seminar Series\n\n\nAbstract\nSupersymmetric gauge theories a
 re an excellent medium for studying problems in both mathematics and physi
 cs. Quiver gauge theories experienced a breakthrough in activity through t
 wo important concepts\, called magnetic quivers and Hasse (phase) diagrams
 .\nThe first helps in understanding the physics of strongly coupled gaug
 e theories and exotic theories with tensionless strings. The second gives
  invaluable information about the phase structure of gauge theories\, in a
 nalogy with phases of water. The talk will review these developments and e
 xplain their significance.\n
LOCATION:https://researchseminars.org/talk/UNISTMath/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Fabian Ruehle (Northeastern University)
DTSTART:20240517T070000Z
DTEND:20240517T080000Z
DTSTAMP:20260422T225637Z
UID:UNISTMath/2
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/UNISTMath/2/
 ">Kolmogorov-Arnold Networks</a>\nby Fabian Ruehle (Northeastern Universit
 y) as part of UNIST Mathematical Sciences Seminar Series\n\n\nAbstract\nWe
  introduce Kolmogorov-Arnold Networks (KANs) as an alternative to standard
  feed-forward neural networks. KANs are based on the Kolmogorov-Arnold re
 presentation theorem\, which means for our purposes that we can represent
  any fu
 nction we want to learn by a weighted sum over basis functions\, taken to 
 be splines. In contrast to standard MLPs\, the function basis of KANs is f
 ixed to be piecewise polynomial rather than a combination of weights and n
 on-linearities\, and we only learn the parameters that control the individ
 ual splines. While this is more expensive than a standard MLP\, KANs have 
 two properties that can offset this cost. First\, KANs can typically work 
 with far fewer parameters. Second\, they exhibit better neural scaling la
 ws\, meaning that the error decreases faster than for MLPs as the number o
 f parameters grows. Fewer parameters also mean that KANs are much more in
 terpretable\, especially when combined with the sparsification and prunin
 g techniques we introduce. This makes KANs interesting as tools for symbo
 lic regression and for scientific discovery. We discuss an example fr
 om knot theory\, where we could recover (trivial and non-trivial) relation
 s among knot invariants.\n\nRegister for Zoom link:\nhttps://us06web.zoom.
 us/meeting/register/tZMkcOqtrDMuGNAQcMlvp3-MJwcWXVU6fzXl\n
LOCATION:https://researchseminars.org/talk/UNISTMath/2/
END:VEVENT
END:VCALENDAR
