# Talks

## 2025
Date | Presenter | Team | Title |
---|---|---|---|
9th May 2025 | Zhiqiang (Walkie) Que | UK | Trustworthy Deep Learning Acceleration with Customizable Design Flow Automation |
18th Apr 2025 | Jiale Yan | Japan | BingoGCN: Towards Scalable and Efficient GNN Acceleration with Fine-Grained Partitioning and SLT |
21st Mar 2025 | Zehuan Zhang | UK | Accelerating MRI Uncertainty Estimation with Mask-based Bayesian Neural Network |
7th Mar 2025 | Yasuyuki Okoshi | Japan | Unlocking the Potential of Extremely Low-Bit Sparse Transformers through Adaptive Supermasks |
12th Feb 2025 | Mark Chen | UK | Progressive Mixed-Precision Decoding for Efficient LLM Inference |
31st Jan 2025 | Daichi Fujiki | Japan | Towards Multi-Layer Processing-in-Memory Systems for General Applications |
31st Jan 2025 | Hongxiang Fan | UK | Unified Butterfly Acceleration Engine |
10th Jan 2025 | Yuta Nagahara | Japan | Toward Efficient Sparse Computing |
10th Jan 2025 | Zehuan Zhang | UK | Hardware-Aware Neural Dropout Search for Reliable Uncertainty Prediction on FPGA |
## 2024
Date | Presenter | Team | Title |
---|---|---|---|
18th Dec 2024 | Junnosuke Suzuki | Japan | Progressive Bit-Precision CNNs & Accelerator |
18th Dec 2024 | Zhiqiang (Walkie) Que | UK | Advancing Deep Learning Hardware for Scientific Applications |
28th Nov 2024 | Hikari Otsuka | Japan | Partially Frozen Random Networks Contain Compact Strong Lottery Tickets |
28th Nov 2024 | Gabriel Figueiredo | UK | Artisan Meta-programming |
15th Nov 2024 | Yasuyuki Okoshi | Japan | WhiteDwarf: 12.24 TFLOPS/W 40 nm Versatile Neural Inference Engine for Ultra-Compact Execution of CNNs and MLPs Through Triple Unstructured Sparsity Exploitation and Triple Model Compression |
15th Nov 2024 | Hongxiang Fan | UK | When Monte-Carlo Dropout Meets Multi-Exit |