Papers

Ito, H., Otsuka, H., Yasudo, R., Que, Z., Coutinho, J. G. F., Fujiki, D., Motomura, M., Guo, C., & Luk, W. (2026). Memory-efficient and trustworthy neural networks via random seed-based design. IEEE Access, vol. 14, pp. 19424-19439.
Abstract: This paper introduces a new way to build AI models that are both efficient and reliable, even when running on limited hardware. By using techniques such as Strong Lottery Tickets (SLT) and carefully controlled randomness in the model design, the approach reduces model size and energy use while keeping predictions accurate and trustworthy. The results show that models can be made much smaller without losing performance, and can even become better at expressing uncertainty.
PDF: link
Chen, H. M., Lu, G., Okoshi, Y., Mo, Z., Motomura, M., & Fan, H. (2025, December). Rethinking Optimal Verification Granularity for Compute-Efficient Test-Time Scaling. In Advances in Neural Information Processing Systems.
Abstract: This paper studies how to improve test-time scaling (TTS), a technique that boosts the reasoning ability of AI models by using more computation during inference. The authors focus on how often a model's intermediate answers should be checked, proposing a flexible method that adjusts this verification frequency rather than checking only final results or every single step. Their approach shows that AI systems can achieve better accuracy while using significantly fewer computational resources, making reasoning both more effective and more efficient.
PDF: link
Que, Z., Fan, H., Coutinho, J. G. F., Guo, C., Luk, W., Yasudo, R., & Motomura, M. (2025, May). Trustworthy Deep Learning Acceleration with Customizable Design Flow Automation. In Proceedings of the 15th International Symposium on Highly Efficient Accelerators and Reconfigurable Technologies.
Abstract: This paper focuses on making advanced AI models run efficiently on limited hardware without sacrificing reliability. It introduces an automated method that balances speed, accuracy, and the trustworthiness of the model's predictions. The results show that this approach can significantly improve performance and energy efficiency compared to existing solutions.
PDF: link