diff --git a/_config.yml b/_config.yml
index 12bf649a7ba..79b06e6765c 100644
--- a/_config.yml
+++ b/_config.yml
@@ -6,9 +6,9 @@
# `jekyll serve`. If you change this file, please restart the server process.
# Site Settings
-title : "Lorem ipsum"
-description : "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet. "
-repository : "RayeRen/acad-homepage.github.io"
+title : "Changchun Zhou"
+description : "Ph.D. Candidate at Peking University, Interested in Energy-Efficient AI Accelerator for Edge Computing and Algorithm-Hardware Co-Design"
+repository : "zhouchch3.github.io/changchunzhou/"
google_scholar_stats_use_cdn : true
# google analytics
@@ -21,14 +21,14 @@ baidu_site_verification : # get baidu_site_verification from https://ziyuan.ba
# Site Author
author:
- name : "Lorem ipsum"
- avatar : "images/android-chrome-512x512.png"
- bio : "Lorem ipsum College"
+ name : "Changchun Zhou (周长春)"
+ avatar : "images/changchunzhou_half.jpg"
+ bio : "Peking University"
location : "Beijing, China"
employer :
pubmed :
- googlescholar : "https://scholar.google.com/citations?user=YOUR_GOOGLE_SCHOLAR_ID"
- email : "Lorem@ipsum.com"
+ googlescholar : "https://scholar.google.com/citations?user=tiWMI1QAAAAJ"
+ email : "zhouchch@pku.edu.cn"
researchgate : # e.g., "https://www.researchgate.net/profile/yourprofile"
uri :
bitbucket :
@@ -37,7 +37,7 @@ author:
flickr :
facebook :
foursquare :
- github : # e.g., "github username"
+ github : "zhouchch3"
google_plus :
keybase :
instagram :
diff --git a/_pages/about.md b/_pages/about.md
index 1e8935ec9ca..56afd2a8fd7 100644
--- a/_pages/about.md
+++ b/_pages/about.md
@@ -17,23 +17,113 @@ redirect_from:
-Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet. Suspendisse condimentum, libero vel tempus mattis, risus risus vulputate libero, elementum fermentum mi neque vel nisl. Maecenas facilisis maximus dignissim. Curabitur mattis vulputate dui, tincidunt varius libero luctus eu. Mauris mauris nulla, scelerisque eget massa id, tincidunt congue felis. Sed convallis tempor ipsum rhoncus viverra. Pellentesque nulla orci, accumsan volutpat fringilla vitae, maximus sit amet tortor. Aliquam ultricies odio ut volutpat scelerisque. Donec nisl nisl, porttitor vitae pharetra quis, fringilla sed mi. Fusce pretium dolor ut aliquam consequat. Cras volutpat, tellus accumsan mattis molestie, nisl lacus tempus massa, nec malesuada tortor leo vel quam. Aliquam vel ex consectetur, vehicula leo nec, efficitur eros. Donec convallis non urna quis feugiat.
+I am currently a Ph.D. candidate in the School of Integrated Circuits, Peking University, Beijing, China, supervised by Prof. Hailong Jiao. I received my Bachelor's degree in Microelectronics Science and Engineering from Sun Yat-sen University, Guangzhou, China, in 2018. My research focuses on energy-efficient AI chips for edge computing. You can find more information in my CV. I am currently looking for a postdoctoral position; if my background matches your interests, please feel free to contact me at any time.
+E-mail: zhouchch@pku.edu.cn | WeChat: zhou1562786
-My research interest includes neural machine translation and computer vision. I have published more than 100 papers at the top international AI conferences with total google scholar citations 260000+ (You can also use google scholar badge).
+
+# 📖 Education
+- *2018.09 - Present*, Doctor of Philosophy in Microelectronics and Solid-State Electronics, Peking University, Beijing, China. **GPA: 3.6/4.0, 1/157 in Comprehensive Ranking**. Thesis Title: Research on On-Chip Neural Network Accelerators for 3D Understanding.
+- *2014.09 - 2018.06*, Bachelor of Engineering in Microelectronics Science and Engineering, Sun Yat-sen University, Guangzhou, China. **GPA: 3.8/5.0**
-# 🔥 News
-- *2022.02*: 🎉🎉 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
+
+
+# 🎖 Honors and Awards
+- Leo KoGuan Scholarship (1/157), *Peking University* 10/2023
+- Exceptional Award for Academic Innovation, *Peking University* 10/2023
+- Merit Student, *Peking University* 10/2023
+- Award for Scientific Research, *Peking University* 12/2022
+- Best Presentation Award, *IEEE CASS Shanghai and Shenzhen Joint Workshop* 5/2021
+- Merit Student, *Peking University* 10/2019
+- National Inspirational Scholarship, *Sun Yat-sen University* 10/2016
+- First Prize in the National College Students Metallography Skills Competition, *Sun Yat-sen University* 5/2016
+- National Inspirational Scholarship, *Sun Yat-sen University* 10/2015
+- First Class Scholarship, *Sun Yat-sen University* 10/2015
+
# 📝 Publications
-CVPR 2016
+JIOT 2023
+
+
+[Sagitta: An Energy-Efficient Sparse 3D-CNN Accelerator for Real-Time 3D Understanding](https://ieeexplore.ieee.org/abstract/document/10224248/), DOI: 10.1109/JIOT.2023.3306435
+
+**C. Zhou**, M. Liu, S. Qiu, X. Cao, Y. Fu, Y. He, and H. Jiao
+
+**2023**, *IEEE Internet of Things Journal (JIOT, IF=10.6, JCR Q1)*
+
+
+
+Abstract
+Three-dimensional (3D) understanding or inference has received increasing attention, where 3D convolutional neural networks (3D-CNNs) have demonstrated superior performance compared to two-dimensional CNNs (2D-CNNs), since 3D-CNNs learn features from all three dimensions. However, 3D-CNNs suffer from intensive computation and data movement. In this paper, Sagitta, an energy-efficient low-latency on-chip 3D-CNN accelerator, is proposed for edge devices. Locality and small differential value dropout are leveraged to increase the sparsity of activations. A full-zero-skipping convolutional microarchitecture is proposed to fully utilize the sparsity of weights and activations. A hierarchical load-balancing scheme is also introduced to increase the hardware utilization. Specialized architecture and computation flow are proposed to enhance the effectiveness of the proposed techniques. Fabricated in a 55-nm CMOS technology, Sagitta achieves 3.8 TOPS/W for C3D at a latency of 0.1 s and 4.5 TOPS/W for 3D U-Net at a latency of 0.9 s at 100 MHz and 0.91 V supply voltage. Compared to the state-of-the-art 3D-CNN and 2D-CNN accelerators, Sagitta enhances the energy efficiency by up to 379.6× and 11×, respectively.
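+
+A rough NumPy sketch of the "small differential value dropout" idea mentioned above (the shapes, threshold, and names here are illustrative assumptions, not the paper's implementation): zeroing small frame-to-frame activation differences raises the sparsity that a zero-skipping datapath can exploit.
+
+```python
+import numpy as np
+
+# Illustrative sketch only (assumed shapes, threshold, and names), not the
+# paper's hardware implementation.
+
+def differential_dropout(act_prev, act_curr, threshold=0.05):
+    """Drop (zero out) small differential activation values between frames."""
+    diff = act_curr - act_prev
+    diff[np.abs(diff) < threshold] = 0.0   # small differential value dropout
+    return diff
+
+rng = np.random.default_rng(0)
+prev_frame = rng.normal(size=(16, 56, 56)).astype(np.float32)   # C x H x W activations
+curr_frame = prev_frame + rng.normal(scale=0.02, size=prev_frame.shape).astype(np.float32)
+
+sparse_diff = differential_dropout(prev_frame, curr_frame)
+print(f"differential activation sparsity: {np.mean(sparse_diff == 0.0):.1%}")
+```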
+
+
+
+
+
+
+
+
+DAC 2021
+
+[An Energy-Efficient Low-Latency 3D-CNN Accelerator Leveraging Temporal Locality, Full Zero-Skipping, and Hierarchical Load Balance](https://ieeexplore.ieee.org/document/9586299)
+
+**C. Zhou**, M. Liu, S. Qiu, Y. He, and H. Jiao
+
+**2021**, *IEEE/ACM Design Automation Conference (DAC)*
+
+
+
+Abstract
+Three-dimensional convolutional neural network (3D-CNN) has demonstrated outstanding classification performance in video recognition compared to two-dimensional CNN (2D-CNN), since 3D-CNN not only learns the spatial features of each frame, but also learns the temporal features across all frames. However, 3D-CNN suffers from intensive computation and data movement. To solve these issues, an energy-efficient low-latency 3D-CNN accelerator is proposed. Temporal locality and small differential value dropout are used to increase the sparsity of activation. Furthermore, to fully utilize the sparsity of weight and activation, a full zero-skipping convolutional microarchitecture is proposed. A hierarchical load-balancing scheme is also introduced to improve resource utilization. With the proposed techniques, a 3D-CNN accelerator is designed in a 55-nm low-power CMOS technology, bringing in up to 9.89x speedup compared to the baseline implementation. Benchmarked with C3D, the proposed accelerator achieves an energy efficiency of 4.66 TOPS/W at 100 MHz and 1.08 V supply voltage.
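+
+A minimal software analogy of the full zero-skipping idea (the paper describes a hardware microarchitecture; the shapes and names below are assumptions): a MAC is issued only when both the weight and the activation are non-zero.
+
+```python
+import numpy as np
+
+# Software analogy only (assumed shapes and names), not the paper's RTL.
+
+def zero_skipping_dot(weights, activations):
+    """Accumulate only non-zero weight/activation pairs and report the skip ratio."""
+    acc, total, skipped = 0.0, 0, 0
+    for w, a in zip(weights.ravel(), activations.ravel()):
+        total += 1
+        if w == 0.0 or a == 0.0:          # either operand zero -> skip this MAC
+            skipped += 1
+            continue
+        acc += w * a
+    return acc, skipped / total
+
+rng = np.random.default_rng(1)
+w = rng.normal(size=(3, 3, 3)) * (rng.random((3, 3, 3)) > 0.6)   # sparse 3D weights
+a = rng.normal(size=(3, 3, 3)) * (rng.random((3, 3, 3)) > 0.5)   # sparse 3D activations
+result, skip_ratio = zero_skipping_dot(w, a)
+print(f"accumulated value: {result:.3f}, skipped MACs: {skip_ratio:.1%}")
+```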
+
+
+
+
+
+
+
+
+ICCAD 2023
-[Deep Residual Learning for Image Recognition](https://openaccess.thecvf.com/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf)
+[An Energy-Efficient Low-Latency 3D-CNN Accelerator Leveraging Temporal Locality, Full Zero-Skipping, and Hierarchical Load Balance](https://ieeexplore.ieee.org/document/9586299)
+
+**C. Zhou**, Y. Fu, M. Liu, S. Qiu, G. Li, Y. He, and H. Jiao
+
+**2023**, *IEEE/ACM International Conference on Computer-Aided Design (ICCAD)*
+
+
+
+Abstract
+Three-dimensional (3D) point cloud has been employed in a wide range of applications recently. As a powerful weapon for point cloud analysis, point-based point cloud neural networks (PNNs) have demonstrated superior performance with less computation complexity and parameters, compared to sparse 3D convolution-based networks and graph-based convolutional neural networks. However, point-based PNNs still suffer from high computational redundancy, large off-chip memory access, and low parallelism in hardware implementation, thereby hindering the applications on edge devices. In this paper, to address these challenges, an energy-efficient 3D point cloud neural network accelerator is proposed for on-chip edge computing. An efficient filter pruning scheme is used to skip the redundant convolution of pruned filters and zero-value feature channels. A block-wise multi-layer perceptron (MLP) fusion method is proposed to increase the on-chip reuse of features, thereby reducing off-chip memory access. A dual-stream blocking technique is proposed for higher parallelism while maintaining inference accuracy. Implemented in an industrial 28-nm CMOS technology, the proposed accelerator achieves an effective energy efficiency of 12.65 TOPS/W and 0.13 mJ/frame energy consumption for PointNeXt-S at 100 MHz, 0.9 V supply voltage, and 8-bit data width. Compared to the state-of-the-art point cloud neural network accelerators, the proposed accelerator enhances the energy efficiency by up to 66.6× and reduces the energy consumption per frame by up to 70.2×.
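+
+A loose sketch of skipping both pruned filters and all-zero feature channels in a point-wise MLP layer (the shapes, masks, and names are assumptions, not the paper's accelerator design):
+
+```python
+import numpy as np
+
+# Illustrative sketch only (assumed shapes, masks, and names), not the paper's
+# accelerator: skip pruned output filters and input channels that are all zero.
+
+def pruned_skipping_mlp(points_feat, weight, filter_mask):
+    """points_feat: (N, C_in); weight: (C_out, C_in); filter_mask: (C_out,) bool."""
+    active_in = np.flatnonzero(np.any(points_feat != 0.0, axis=0))   # skip all-zero channels
+    out = np.zeros((points_feat.shape[0], weight.shape[0]), dtype=points_feat.dtype)
+    for f in np.flatnonzero(filter_mask):                            # skip pruned filters
+        out[:, f] = points_feat[:, active_in] @ weight[f, active_in]
+    return out
+
+rng = np.random.default_rng(2)
+feat = rng.normal(size=(1024, 64)).astype(np.float32)
+feat[:, rng.choice(64, size=24, replace=False)] = 0.0                # some all-zero channels
+w = rng.normal(size=(128, 64)).astype(np.float32)
+mask = rng.random(128) > 0.3                                         # ~30% of filters pruned
+print(pruned_skipping_mlp(feat, w, mask).shape)                      # -> (1024, 128)
+```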
+
+
+
+
+
+
+
+
+
-**Kaiming He**, Xiangyu Zhang, Shaoqing Ren, Jian Sun
-[**Project**](https://scholar.google.com/citations?view_op=view_citation&hl=zh-CN&user=DhtAFkwAAAAJ&citation_for_view=DhtAFkwAAAAJ:ALROH1vI_8AC)
- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
@@ -46,13 +136,10 @@ My research interest includes neural machine translation and computer vision. I
- *2021.10* Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
- *2021.09* Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
-# 📖 Educations
-- *2019.06 - 2022.04 (now)*, Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
-- *2015.09 - 2019.06*, Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
# 💬 Invited Talks
- *2021.06*, Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet.
- *2021.03*, Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus ornare aliquet ipsum, ac tempus justo dapibus sit amet. \| [\[video\]](https://github.com/)
# 💻 Internships
-- *2019.05 - 2020.02*, [Lorem](https://github.com/), China.
\ No newline at end of file
+- *2019.05 - 2020.02*, [Lorem](https://github.com/), China.
diff --git a/images/DAC.emf b/images/DAC.emf
new file mode 100644
index 00000000000..d1c674312dc
Binary files /dev/null and b/images/DAC.emf differ
diff --git a/images/ICCAD.emf b/images/ICCAD.emf
new file mode 100644
index 00000000000..03ef9145d19
Binary files /dev/null and b/images/ICCAD.emf differ
diff --git a/images/IOTJ.png b/images/IOTJ.png
new file mode 100644
index 00000000000..fcac0204d41
Binary files /dev/null and b/images/IOTJ.png differ
diff --git a/images/changchunzhou_half.jpg b/images/changchunzhou_half.jpg
new file mode 100644
index 00000000000..247016bf2e0
Binary files /dev/null and b/images/changchunzhou_half.jpg differ