Call for Papers

Algorithm optimization (e.g., Faster R-CNN, YOLOv2, SSD, etc. for object detection and other neural network applications)
Model compression (e.g., pruning, low-rank approximation, etc.)
Deep learning frameworks (e.g., Caffe2 or TensorFlow for mobile and embedded systems)
On-device training (e.g., training neural networks on mobile devices and SoC platforms)
Software code optimization (e.g., zero-aware Winograd convolution)
Hardware-conscious optimization (e.g., quantization, low-voltage design)
CPU/DSP/GPU architecture enhancements for neural networks (e.g., co-processor for neural network acceleration)
FPGA/ASIC accelerators for neural networks (e.g., zero-aware low-precision CNN/RNN accelerators)
Novel circuit techniques for neural networks (e.g., multiplier-less convolution, neuromorphic circuits, etc.)

Papers should be no more than 4 pages in ACM format. Accepted papers will be presented in oral or poster sessions. All submissions will undergo double-blind review. Accepted papers will be distributed only at the workshop, without official proceedings; accepted papers that remain unpublished may therefore be submitted to other venues.

Schedule

July 15: Paper submission deadline (the submission website will open in June).

August 15: Notification of acceptance.

Organizers

Yiran Chen, Duke University

Sungjoo Yoo, Seoul National University

Technical Program Committee (more members are being confirmed)

Yiran Chen (Duke University), Daehyun Kim (Google), Jaeyoun Kim (Facebook), Jian Ouyang (Baidu), Xuehai Qian (USC), Jongsoo Park (Intel), Minsoo Rhu (NVIDIA), Eunsoo Shim (Samsung Electronics), Yu Wang (Tsinghua), Dong Hyuk Woo (Google), Sungjoo Yoo (Seoul National University), Yuan Xie (UCSB).