In conjunction with ESWEEK 2018, Turin, Italy

October 4, 2018

http://cmalab.snu.ac.kr/HENP2018

The International Workshop on Highly Efficient Neural Processing (HENP) is a forum for presenting state-of-the-art research on highly efficient neural processing. The program combines oral presentations and posters, including invited talks from Facebook, Samsung Electronics, and others.

Topics

Neural processor architectures
Systems for accelerating neural processing
Memory systems for neural processing
Neural network design/implementation frameworks
Compilers and runtime environments for neural processing
Efficient neural network models and training algorithms

Advance Program

Each oral presentation is allotted 25 minutes plus 5 minutes for questions. Posters are presented during the coffee breaks and lunch as well as during the dedicated poster sessions.

9:00-10:00am    Invited Talks 1
  - Fei Sun (Facebook): The Pitfalls and Guidelines for Mobile ML Benchmarking
  - Yoonsuk Hyun (Samsung Electronics): Efficient Neural Network and Training Algorithm for Real-Time Object Detection
10:00-10:30am   Coffee & Poster
10:30-11:00am   Invited Talk 2
  - Jerome Revaud (Naver Labs Europe): End-to-end Learning of Deep Visual Representations for Image Retrieval
11:00am-12:00pm Keynote
  - Onur Mutlu (ETH Zurich): Processing Data Where It Makes Sense for Modern Workloads and Systems: Enabling In-Memory Computation
12:00-12:30pm   Poster
12:30-2:00pm    Lunch & Poster
2:00-3:30pm     Invited Talks 3
  - Soonhoi Ha (SNU): Sampled Simulation for Fast Performance Estimation of Neural Network Accelerators
  - Muhammad Shafique (VUT): Robust Machine Learning: The Road to Intelligent Systems is under Attack!
  - Jongeun Lee (UNIST): Toward More Efficient Acceleration of Deep Neural Networks Using Stochastic Computing
3:30-4:00pm     Coffee & Poster
4:00-5:00pm     Invited Talks 4
  - Frédéric Pétrot (TIMA): High Throughput and High Accuracy Classification with Convolutional Ternary Neural Networks
  - Daehyun Ahn (POSTECH): Viterbi-based Pruning for Sparse Matrix with Fixed and High Index Compression Ratio

Poster

The maximum poster size is A0, i.e., 120 cm (width) x 150 cm (height).

- Qingcheng Xiao (Peking University): Automated Systolic Array Architecture Synthesis for End-to-End CNN Inference on FPGAs
- Leandro Medus (University of Valencia): Hardware Architecture for Feed-Forward Neural Networks for Handwritten Character Recognition
- Bernhard Egger (SNU): Architectures and Algorithms for User Customization of CNNs
- Duseok Kang (SNU): C-GOOD: C-code Generation Framework for Optimized On-device Deep Learning
- Youngmin Yi (University of Seoul): Acceleration of CNN Inference Using both CPU and GPU in Embedded Platforms
- Eunhyeok Park (SNU): An Energy-Efficient Neural Network Accelerator based on Outlier-Aware Low Precision Computation
- Seunghwan Cho (SNU): McDRAM: Low Latency and Energy-Efficient Matrix Computations in DRAM
- Sungjoo Yoo (SNU): Value-aware Quantization for Training and Inference of Neural Networks
- Jongeun Lee (UNIST): Toward More Efficient Acceleration of Deep Neural Networks Using Stochastic Computing
- Daehyun Ahn (POSTECH): Viterbi-based Pruning for Sparse Matrix with Fixed and High Index Compression Ratio
- Frédéric Pétrot (TIMA): High Throughput and High Accuracy Classification with Convolutional Ternary Neural Networks
- Fei Sun (Facebook): The Pitfalls and Guidelines for Mobile ML Benchmarking
- Jingtong Hu (University of Pittsburgh): Heterogeneous FPGA-based Cost-Optimal Design for Timing-Constrained CNNs
- Duseok Kang (SNU): Comparison of Convolutional Networks for on-Device Object Detection

Organizers

Yiran Chen, Duke Univ., yiran.chen@duke.edu

Sungjoo Yoo, Seoul National Univ., sungjoo.yoo@gmail.com

Technical Program Committee

Yiran Chen (Duke University), Daehyun Kim (Samsung), Jaeyoun Kim (Google), Jian Ouyang (Baidu), Xuehai Qian (USC), Jongsoo Park (Facebook), Minsoo Rhu (KAIST), Eunsoo Shim (Samsung Electronics), Fei Sun (Facebook), Yu Wang (Tsinghua University), Dong Hyuk Woo (Google), Sungjoo Yoo (Seoul National University), Yuan Xie (UCSB)

Previous edition

HENND 2017, http://cmalab.snu.ac.kr/HENND2017