Multimedia Computing Towards Communications
The MC2 lab aims to improve the efficiency of multimedia communication by developing multimedia computing approaches that benefit from recent advances in computer vision and machine learning. With the explosion of multimedia content, multimedia communications have become increasingly prominent in communication networks, affecting the daily lives of billions of people and millions of businesses worldwide. This popularity, in turn, increases the amount of data carried over networks, which is expected to grow almost 40-fold over the next five years. Given the limited spectrum, multimedia applications face a bandwidth-hungry bottleneck. On one hand, high spectral efficiency has been an ongoing demand in communication development. On the other hand, pioneering research on delivering the content as perceived by humans is relieving the bandwidth-hungry issue from the perspective of perceptual compression and coding, in which computer vision and machine learning techniques have been actively studied. This is precisely what the MC2 lab focuses on: incorporating state-of-the-art computer vision and machine learning methodologies into image/video compression and transmission to improve the efficiency of multimedia communications.
Our papers entitled “Removing Rain in Videos: A Large-scale Database and a Two-stream ConvLSTM Approach” and “Quality-gated Convolutional LSTM for Enhancing Compressed Video” have been accepted by ICME 2019 for oral presentation. Congratulations, Tie Liu and Ren Yang.
Our co-authored paper entitled “Learning QoE of Mobile Video Transmission with Deep Neural Network: A Data-driven Approach” has been accepted by IEEE JSAC.
Our papers entitled “Viewport Proposal CNN for 360° Video Quality Assessment” and “Attention Based Glaucoma Detection: A Large-scale Database and CNN Model” have been accepted by CVPR 2019. Congratulations, Chen Li and Liu Li.
Three of our papers have been accepted by ICASSP.
We are organizing a special issue of IEEE J-STSP (IF: 4.361), titled “Perception-driven 360-degree Video Processing”. Please see the Call for Papers.
Our paper entitled “A DenseNet Based Approach for Multi-Frame In-Loop Filter in HEVC” has been accepted by DCC 2019. Our co-authored paper entitled “Texture-classification Accelerated CNN Scheme for Fast Intra CU Partition in HEVC” has also been accepted by DCC 2019.
Our paper entitled “Assessing Visual Quality of Omnidirectional Videos” has been accepted by IEEE Transactions on Circuits and Systems for Video Technology. Congratulations, Chen Li.
Our paper entitled “Fast H.264 to HEVC Transcoding: A Deep Learning Method” has been accepted by IEEE Transactions on Multimedia. Congratulations, Jingyao Xu.
Two of our papers entitled “Diversity-Driven Extensible Hierarchical Reinforcement Learning” and “Image Saliency Prediction in Transformed Domain: A Deep Complex Neural Network Method” have been accepted by AAAI-19. Well done, Lai Jiang, Yuhang Song and Jianyi Wang.
Prof. Mai Xu was invited by Professor Patrick Le Callet of Polytech Nantes and gave a talk there.
Our paper entitled “Reducing Complexity of HEVC: A Deep Learning Approach” has been selected as a hot paper (July 2018).
Our paper entitled “Enhancing Quality for HEVC Compressed Videos” has been accepted by IEEE Transactions on Circuits and Systems for Video Technology.
Prof. Mai Xu was invited by Professor Pier Luigi Dragotti of Imperial College London and gave a talk there.
Our paper entitled “Predicting Head Movement in Panoramic Video: A Deep Reinforcement Learning Approach” has been accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence.
- More ...