International Workshop on Highly Efficient Neural Processing (HENP)
In conjunction with ESWEEK 2018, Turin, Italy
October 4, 2018
http://cmalab.snu.ac.kr/HENP2018
The International Workshop on Highly Efficient Neural Processing (HENP) is a forum for presenting state-of-the-art research in highly efficient neural processing. The workshop combines oral presentations and posters, and includes invited talks from Facebook, Samsung Electronics, and other organizations.
Topics
We invite submissions of papers related to (but not limited to) the following topics:
Neural processor architectures
Systems for accelerating neural processing
Memory systems for neural processing
Neural network design/implementation frameworks
Compilers and runtime environments for neural processing
Efficient neural network models and training algorithms
Advance Program
Each oral presentation is allotted 25 minutes plus 5 minutes for questions. Posters are presented during the coffee breaks and lunch as well as during the poster session.
Time | Session | Chair | Speaker | Title |
9-10am | Invited Talks 1 | Sungjoo Yoo, SNU | Fei Sun, Facebook | The Pitfalls and Guidelines for Mobile ML Benchmarking |
9-10am | Invited Talks 1 | Sungjoo Yoo, SNU | Frédéric Pétrot, TIMA | High Throughput and High Accuracy Classification with Convolutional Ternary Neural Networks |
10-10:30am | Coffee & Poster | | | |
10:30-11am | Invited Talk 2 | Yiran Chen, Duke Univ. | Jerome Revaud, Naver Labs Europe | Visual Search in Large Image Collections |
11am-12pm | Keynote | | Onur Mutlu, ETH Zurich | Processing Data Where It Makes Sense in Modern Computing Systems: Enabling In-Memory Computation |
12-12:30pm | Poster | | | |
12:30-2pm | Lunch & Poster | | | |
2-3:30pm | Invited Talks 3 | Fei Sun, Facebook | Soonhoi Ha, SNU | Sampled Simulation for Fast Performance Estimation of Neural Network Accelerators |
2-3:30pm | Invited Talks 3 | Fei Sun, Facebook | Muhammad Shafique, VUT | Robust Machine Learning: The Road to Intelligent Systems is under Attack! |
2-3:30pm | Invited Talks 3 | Fei Sun, Facebook | Jongeun Lee, UNIST | Toward More Efficient Acceleration of Deep Neural Networks Using Stochastic Computing |
3:30-4pm | Coffee & Poster | | | |
4-5pm | Invited Talks 4 | Jingtong Hu, Univ. of Pittsburgh | Yoonsuk Hyun, Samsung Electronics | Efficient Neural Network and Training Algorithm for Real-Time Object Detection |
4-5pm | Invited Talks 4 | Jingtong Hu, Univ. of Pittsburgh | Daehyun Ahn, POSTECH | Viterbi-based Pruning for Sparse Matrix with Fixed and High Index Compression Ratio |
Poster
The maximum poster size is A0, i.e., 120 cm (width) x 150 cm (height).
# | Affiliation | Presenter | Title |
1 | Peking University | Qingcheng Xiao | Automated Systolic Array Architecture Synthesis for End-to-End CNN Inference on FPGAs |
2 | University of Valencia | Leandro Medus | Hardware Architecture for Feed-Forward Neural Networks for Handwritten Character Recognition |
3 | SNU | Bernhard Egger | Architectures and Algorithms for User Customization of CNNs |
4 | SNU | Duseok Kang | C-GOOD: C-code Generation Framework for Optimized On-device Deep Learning |
5 | University of Seoul | Kyungchul Park | Acceleration of CNN Inference Using both CPU and GPU in Embedded Platforms |
6 | SNU | Eunhyeok Park | An Energy-Efficient Neural Network Accelerator based on Outlier-Aware Low Precision Computation |
7 | SNU | Seunghwan Cho | McDRAM: Low Latency and Energy-Efficient Matrix Computations in DRAM |
8 | SNU | Sungjoo Yoo | Value-aware Quantization for Training and Inference of Neural Networks |
9 | UNIST | Jongeun Lee | Toward More Efficient Acceleration of Deep Neural Networks Using Stochastic Computing |
10 | POSTECH | Daehyun Ahn | Viterbi-based Pruning for Sparse Matrix with Fixed and High Index Compression Ratio |
11 | TIMA | Frédéric Pétrot | High Throughput and High Accuracy Classification with Convolutional Ternary Neural Networks |
12 | Facebook | Fei Sun | The Pitfalls and Guidelines for Mobile ML Benchmarking |
13 | University of Pittsburgh | Jingtong Hu | Heterogeneous FPGA-based Cost-Optimal Design for Timing-Constrained CNNs |
14 | SNU | Duseok Kang | Comparison of Convolutional Networks for on-Device Object Detection |