Thursday, 27 December 2018

CityU Distinguished Lecture - How to Make an Artificial Vision System Smart?

Most artificial vision systems consist of cameras and computers, analogous to the eyes and brain of a human being, but the linkage model between the two parts is at a much lower level than in a human.  In this talk, Prof. Wen Gao discussed how we can improve that linkage model to make an artificial vision system smart. Prof. Wen Gao (Member of the Chinese Academy of Engineering; Boya Chair Professor and Director of the Faculty of Information and Engineering Science, Peking University) was our guest speaker, and his presentation was titled "How to Make a Visual Computing and Cognitive (VCC) System Smart".


Firstly, Prof. Wen Gao compared an engineering VCC system with a human-like VCC system. A good recent tool for linking the camera and the computer is deep learning with neural networks.
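As a loose illustration of that linkage, the sketch below feeds one camera frame through a pretrained CNN so that the "computer" side receives semantic labels rather than raw pixels. The PyTorch model choice and the file name frame.jpg are assumptions made for the example, not anything shown in the lecture.

import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for the pretrained model.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# A pretrained CNN acting as the camera-to-computer "linkage".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

frame = Image.open("frame.jpg").convert("RGB")   # placeholder camera frame
with torch.no_grad():
    logits = model(preprocess(frame).unsqueeze(0))
top5 = logits.softmax(dim=1).topk(5)
print(top5.indices, top5.values)                 # class indices and confidences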


Prof. Wen Gao then briefly traced the evolution of the eye, from the compound eye to the fish eye, the bird eye and finally the human eye.  A biological vision system comprises the eyes, the visual pathways and the visual field of the brain.


He then compared the biological system with the artificial system in terms of three key parts: the digital retina, networking and cloud computing.


The VCC work has four contributions, as follows:
1.      Background Modeling based Surveillance Video Coding
2.      Visual Feature Compression for Visual Search
3.      Joint Rate-Distortion (R-D) and Rate-Accuracy (R-A) Optimization
4.      Standards for digital retina

The key issue in video coding is redundancy removal.  The background-modeling based surveillance video coding scheme was shown in a diagram.
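As a rough sketch of that idea (not the actual AVS/IEEE 1857.4 coding tools), the snippet below maintains a slowly updated background model and measures how little of each frame is foreground; in background-modeling based coding, background blocks are predicted from a long-term background reference at almost no bit cost, so only the small foreground portion needs full coding. OpenCV, NumPy and the file name surveillance.mp4 are assumptions of the example.

import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.mp4")   # placeholder input file
background = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

    if background is None:
        background = gray.copy()                         # initial background model
    else:
        cv2.accumulateWeighted(gray, background, 0.01)   # slow background update

    residual = cv2.absdiff(gray, background)             # difference from background
    foreground = residual > 25                           # rough foreground mask
    # Background blocks are nearly free to code; only this fraction needs full coding.
    print("foreground ratio: %.3f" % foreground.mean())

cap.release()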


The second contribution, visual feature compression, was then briefly described.  The Moving Picture Experts Group (MPEG) standard Compact Descriptors for Visual Search (CDVS) was introduced, and he also mentioned recent work with deep convolutional neural networks (CNNs).
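The sketch below illustrates the compact-descriptor idea only in spirit: local features are aggregated into a short binary code that can be sent to the server instead of the whole image. ORB features and simple sign binarization are stand-ins here; the real CDVS pipeline is SIFT-based with far more careful feature selection and aggregation, and the file names are placeholders.

import cv2
import numpy as np

def compact_descriptor(path, n_keypoints=300):
    """Turn an image into a 256-bit global code (illustrative stand-in for CDVS)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=n_keypoints)
    _, desc = orb.detectAndCompute(img, None)          # local binary descriptors
    if desc is None:
        return np.zeros(256, dtype=np.uint8)
    bits = np.unpackbits(desc, axis=1).astype(np.float32)
    global_desc = bits.mean(axis=0)                    # aggregate to one vector
    return (global_desc > 0.5).astype(np.uint8)        # quantize to a compact code

def hamming_similarity(a, b):
    return 1.0 - np.count_nonzero(a != b) / a.size

q = compact_descriptor("query.jpg")        # placeholder file names
r = compact_descriptor("reference.jpg")
print("similarity:", hamming_similarity(q, r))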


He then described the joint R-D and R-A optimization model, which trades off coding bit-rate against both reconstruction quality for viewing and accuracy for analysis.
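A toy sketch of the joint optimization idea, under assumed numbers: each coding parameter gives an operating point (rate, distortion, analysis accuracy), and the encoder picks the point that minimizes a combined Lagrangian cost covering both human viewing and machine analysis. The particular cost form and all values below are illustrative assumptions, not the model presented in the talk.

# (quantization parameter, rate in kbps, distortion as MSE, analysis accuracy)
operating_points = [
    (22, 4000,  2.0, 0.95),
    (27, 2000,  4.5, 0.93),
    (32, 1000,  9.0, 0.88),
    (37,  500, 18.0, 0.78),
]

def joint_cost(rate, distortion, accuracy,
               lambda_rate=0.002, weight_accuracy=50.0):
    # One possible joint formulation: J = D + w_A * (1 - A) + lambda_R * R
    return distortion + weight_accuracy * (1.0 - accuracy) + lambda_rate * rate

best = min(operating_points, key=lambda p: joint_cost(p[1], p[2], p[3]))
print("chosen QP:", best[0])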


After that, Prof. Wen Gao introduced standards for the digital retina, such as IEEE 1857.4 for video coding and MPEG-7 Part 13 for compact feature descriptors.


Prof. Wen Gao outlined three steps towards making the digital retina a reality:
- Off-line processing and edge computing
- Using GPUs etc. to implement the new features
- Using ASICs to implement the new features

Finally, Prof. Wen Gao presented his team's recent research, covering coding and compression as well as analysis and cognition.


He demonstrated their pulse retina chip on high-speed motion, such as a hard disk spinning at up to 7,200 revolutions per minute (rpm).
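A simplified sketch of the pulse (spike) retina principle: each pixel integrates incoming light and fires a spike whenever the accumulated charge crosses a threshold, so brighter pixels fire more often and brightness can be recovered from the inter-spike interval even for very fast motion. The integrate-and-fire model and parameters below are illustrative assumptions, not the actual chip design.

import numpy as np

def simulate_spikes(intensity, n_steps=1000, threshold=255.0):
    """Integrate-and-fire simulation for one pixel; returns spike time indices."""
    acc, spikes = 0.0, []
    for t in range(n_steps):
        acc += intensity
        if acc >= threshold:
            spikes.append(t)
            acc -= threshold
    return spikes

def reconstruct_intensity(spikes, threshold=255.0):
    """Estimate brightness from the average inter-spike interval."""
    if len(spikes) < 2:
        return 0.0
    isi = np.diff(spikes).mean()
    return threshold / isi

for true_intensity in (10.0, 60.0, 200.0):
    est = reconstruct_intensity(simulate_spikes(true_intensity))
    print("true %.0f -> estimated %.1f" % (true_intensity, est))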


At the end, he concluded that making a VCC system smart takes two steps: a digital retina (improving the camera with visual coding) and a human-like retina (improving the camera with spike coding).  Future work would focus on improving the pathway functions between the camera and the perceptual system, and on adding feedback functions between them.

A group photo was taken before the lecture ended.


Reference:
CS Dept., CityU - https://www.cs.cityu.edu.hk/
