Paper Summary
1. Paper Bibliography
Paper Title
- Recurrent back-projection network for video super-resolution.
Authors
- Muhammad Haris, Greg Shakhnarovich, and Norimichi Ukita (Haris et al.)
Publication / Conference
- Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
Year
- 2019
2. Problems & Motivations
Summary of problems in existing VSR research mentioned in the paper + related work
Frames can be aligned explicitly
- Temporal frames are aligned by alignment modules using motion cues between frames [25, 2, 30, 27]
- Frame concatenation approach [2, 16, 25]: many frames are processed by the network at once, which makes training hard, and because the frames are simply concatenated it is difficult to represent multiple different motions.
- RNNs [30, 27, 13]: recurrent feedback captures temporal smoothness well, but it is hard to jointly model both subtle and significant changes across all video frames.
Back-projection
- An idea originally used in MISR [14, 15] (see the sketch after this list)
- Back-projection iteratively computes a residual image as the reconstruction error between the target image and its corresponding set of images
- The residuals are back-projected onto the target image and used to increase its resolution
- Multiple residuals can represent both subtle and significant differences between the target frame and the other frames
- DBPN [8]: applied back-projection to SISR, iteratively refining HR feature maps through multiple up-/down-sampling layers
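To make the back-projection loop concrete, here is a minimal PyTorch sketch of the classical iterative back-projection idea from [14]; the function name and the simple bicubic degradation model are assumptions for illustration, not the paper's code.

import torch
import torch.nn.functional as F

def iterative_back_projection(lr, scale=4, num_iters=10):
    # Hypothetical sketch: refine an SR estimate by repeatedly
    # back-projecting the LR-space reconstruction error.
    sr = F.interpolate(lr, scale_factor=scale, mode='bicubic', align_corners=False)
    for _ in range(num_iters):
        # Simulate the LR image from the current SR estimate (assumed degradation model)
        lr_est = F.interpolate(sr, scale_factor=1.0 / scale, mode='bicubic', align_corners=False)
        residual = lr - lr_est  # reconstruction error in LR space
        # Back-project the residual into HR space and add it to the estimate
        sr = sr + F.interpolate(residual, scale_factor=scale, mode='bicubic', align_corners=False)
    return sr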
[2] Jose Caballero, Christian Ledig, Andrew P Aitken, Alejandro Acosta, Johannes Totz, Zehan Wang, and Wenzhe Shi. Real-time video super-resolution with spatio-temporal networks and motion compensation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[8] Muhammad Haris, Greg Shakhnarovich, and Norimichi Ukita. Deep back-projection networks for super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
[13] Yan Huang, Wei Wang, and Liang Wang. Bidirectional recurrent convolutional networks for multi-frame super-resolution. In Advances in Neural Information Processing Systems, pages 235–243, 2015.
[14] Michal Irani and Shmuel Peleg. Improving resolution by image registration. CVGIP: Graphical models and image processing, 53(3):231–239, 1991.
[16] Younghyun Jo, Seoung Wug Oh, Jaeyeon Kang, and Seon Joo Kim. Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3224–3232, 2018.
[25] Ding Liu, Zhaowen Wang, Yuchen Fan, Xianming Liu, Zhangyang Wang, Shiyu Chang, and Thomas Huang. Robust video super-resolution with learned temporal dynamics. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 2526–2534. IEEE, 2017.
[27] Mehdi SM Sajjadi, Raviteja Vemulapalli, and Matthew Brown. Frame-recurrent video super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6626–6634, 2018.
[30] Xin Tao, Hongyun Gao, Renjie Liao, Jue Wang, and Jiaya Jia. Detail-revealing deep video super-resolution. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, pages 22–29, 2017.
3. Proposed Solutions
Summary of the solutions proposed in the paper
1. Network Architecture
- RBPN consists of three stages
1. Initial feature extraction
1-1) The target frame It is mapped to an LR feature tensor Lt
1-2) The target frame It, a previous frame It-k, and the flow map Ft-k (between It and It-k) are concatenated; Ft-k helps extract the details missing between It and It-k
1-3) This stacked 8-channel "image" (3 target + 3 neighbor + 2 flow channels) is mapped to a neighbor feature tensor Mt-k
2. Multiple Projections
2-1) The SISR path and the MISR path are combined to produce refined HR features of the target frame
3. Reconstruction
3-1) All HR feature maps fed into the reconstruction module are concatenated and passed through a single convolutional layer to produce the output
import torch
import torch.nn as nn

# ConvBlock, DeconvBlock, ResnetBlock, and DBPNS come from the repo
# (base_networks.py and dbpns.py in alterzero/RBPN-PyTorch)
from base_networks import ConvBlock, DeconvBlock, ResnetBlock
from dbpns import Net as DBPNS

class Net(nn.Module):
    def __init__(self, num_channels, base_filter, feat, num_stages, n_resblock, nFrames, scale_factor):
        super(Net, self).__init__()
        # base_filter=256
        # feat=64
        self.nFrames = nFrames

        # Deconvolution hyperparameters per scale factor
        if scale_factor == 2:
            kernel = 6
            stride = 2
            padding = 2
        elif scale_factor == 4:
            kernel = 8
            stride = 4
            padding = 2
        elif scale_factor == 8:
            kernel = 12
            stride = 8
            padding = 2

        # Initial feature extraction
        self.feat0 = ConvBlock(num_channels, base_filter, 3, 1, 1, activation='prelu', norm=None)
        # 8 input channels = 3 (target) + 3 (neighbor) + 2 (optical flow)
        self.feat1 = ConvBlock(8, base_filter, 3, 1, 1, activation='prelu', norm=None)

        ### DBPNS (SISR path)
        self.DBPN = DBPNS(base_filter, feat, num_stages, scale_factor)

        # Res-Block1 (MISR path: ResNet blocks + deconvolution upsampling)
        modules_body1 = [
            ResnetBlock(base_filter, kernel_size=3, stride=1, padding=1, bias=True, activation='prelu', norm=None)
            for _ in range(n_resblock)]
        modules_body1.append(DeconvBlock(base_filter, feat, kernel, stride, padding, activation='prelu', norm=None))
        self.res_feat1 = nn.Sequential(*modules_body1)

        # Res-Block2 (residual refinement, stays at HR resolution)
        modules_body2 = [
            ResnetBlock(feat, kernel_size=3, stride=1, padding=1, bias=True, activation='prelu', norm=None)
            for _ in range(n_resblock)]
        modules_body2.append(ConvBlock(feat, feat, 3, 1, 1, activation='prelu', norm=None))
        self.res_feat2 = nn.Sequential(*modules_body2)

        # Res-Block3 (decoder: strided conv back down to LR features)
        modules_body3 = [
            ResnetBlock(feat, kernel_size=3, stride=1, padding=1, bias=True, activation='prelu', norm=None)
            for _ in range(n_resblock)]
        modules_body3.append(ConvBlock(feat, base_filter, kernel, stride, padding, activation='prelu', norm=None))
        self.res_feat3 = nn.Sequential(*modules_body3)

        # Reconstruction: concat the (nFrames - 1) hidden states, single conv to RGB
        self.output = ConvBlock((nFrames - 1) * feat, num_channels, 3, 1, 1, activation=None, norm=None)

        for m in self.modules():
            classname = m.__class__.__name__
            if classname.find('Conv2d') != -1:
                torch.nn.init.kaiming_normal_(m.weight)
                if m.bias is not None:
                    m.bias.data.zero_()
            elif classname.find('ConvTranspose2d') != -1:
                torch.nn.init.kaiming_normal_(m.weight)
                if m.bias is not None:
                    m.bias.data.zero_()

    def forward(self, x, neigbor, flow):  # 'neigbor' spelling follows the original repo
        ### Initial feature extraction
        feat_input = self.feat0(x)
        feat_frame = []
        for j in range(len(neigbor)):
            feat_frame.append(self.feat1(torch.cat((x, neigbor[j], flow[j]), 1)))

        #### Projection
        Ht = []
        for j in range(len(neigbor)):
            h0 = self.DBPN(feat_input)
            h1 = self.res_feat1(feat_frame[j])
            e = h0 - h1
            e = self.res_feat2(e)
            h = h0 + e
            Ht.append(h)
            feat_input = self.res_feat3(h)

        #### Reconstruction
        out = torch.cat(Ht, 1)
        output = self.output(out)
        return output
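For reference, a hypothetical usage sketch of the class above. It requires base_networks.py and dbpns.py from the linked repo; num_stages=3 and n_resblock=5 are my reading of the repo's defaults, so treat them as assumptions.

model = Net(num_channels=3, base_filter=256, feat=64, num_stages=3,
            n_resblock=5, nFrames=7, scale_factor=4)
x = torch.randn(1, 3, 64, 64)                            # target LR frame It
neigbor = [torch.randn(1, 3, 64, 64) for _ in range(6)]  # 6 neighbor frames
flow = [torch.randn(1, 2, 64, 64) for _ in range(6)]     # 6 flow maps (u, v)
sr = model(x, neigbor, flow)
print(sr.shape)  # torch.Size([1, 3, 256, 256])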
2. Multiple Projections
- The multiple projection stage is built from encoder-decoder modules
def forward(self, x, neigbor, flow):
    ### Initial feature extraction
    feat_input = self.feat0(x)  # x: It, feat_input: Lt
    feat_frame = []
    for j in range(len(neigbor)):  # n = 1 ... nFrames - 1
        # concat(It, It-n, Ft-n) -> Mt-n
        feat_frame.append(self.feat1(torch.cat((x, neigbor[j], flow[j]), 1)))

    #### Projection
    Ht = []
    for j in range(len(neigbor)):
        # Encoder
        h0 = self.DBPN(feat_input)          # input: Lt-n-1 (target features); h0: H(l)t-n-1 (SISR path)
        h1 = self.res_feat1(feat_frame[j])  # input: Mt-n (neighbor features); h1: H(m)t-n (MISR path)
        e = h0 - h1
        e = self.res_feat2(e)               # et-n (residual)
        h = h0 + e
        Ht.append(h)                        # Ht-n (refined HR features)
        # Decoder
        feat_input = self.res_feat3(h)      # Lt-n (next LR features)

    #### Reconstruction
    out = torch.cat(Ht, 1)
    output = self.output(out)
    return output
3. Interpretation
1) The target image goes through Net_sisr to produce HR features
2) Net_misr performs implicit frame alignment: it takes the neighbor frames together with their motion and produces HR warped features (recovering details missing from the target frame)
3) The residual between 1) and 2) is refined and fused back into the HR features from 1) to form the hidden state Ht-k
4) The decoder takes 3) and produces the next input Lt-k (see the equations after this list)
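Written in the paper's notation, one pass of the loop above at recurrence step n reads as follows; the exact sub/superscripts are my reading of the code against the paper.

\begin{aligned}
H^{l}_{t-n-1} &= \mathrm{Net}_{sisr}(L_{t-n-1}) && \text{(SISR path: DBPN, h0)}\\
H^{m}_{t-n}   &= \mathrm{Net}_{misr}(M_{t-n})   && \text{(MISR path: ResNet + deconv, h1)}\\
e_{t-n}       &= \mathrm{Net}_{res}\!\left(H^{l}_{t-n-1} - H^{m}_{t-n}\right) && \text{(residual)}\\
H_{t-n}       &= H^{l}_{t-n-1} + e_{t-n} && \text{(hidden state, appended to Ht)}\\
L_{t-n}       &= \mathrm{Net}_{D}(H_{t-n}) && \text{(decoder: next LR features)}
\end{aligned}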
4. Input Format
- Patch size: 64 x 64
- 7 frames (PF: past and future frames are used as context)
5. Temporal Information Modeling Framework
Base framework (2D CNN, 3D CNN, RNN, etc.)
- RNN
Architectural contributions?
- Borrows the SISR and MISR ideas
- Iteratively produces better and better HR features
- Uses a hidden state, but the network outputs only a single frame
6. Frame Alignment Method
Implicit or Explicit
- Implicit + Explicit?
Additional notes
- Concatenates the neighboring frame, the target frame, and the optical flow
7. Upsampling Method
- Both the SISR block and the MISR block upsample via deconvolution
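As a sanity check on the (kernel, stride, padding) settings chosen per scale factor in the architecture code above, a ConvTranspose2d with those values gives exact x2/x4/x8 upsampling via out = (in - 1) * stride - 2 * padding + kernel; the channel count of 64 here is an arbitrary choice for illustration.

import torch
import torch.nn as nn

# (kernel, stride, padding) = (6, 2, 2), (8, 4, 2), (12, 8, 2) for x2 / x4 / x8
up4 = nn.ConvTranspose2d(64, 64, kernel_size=8, stride=4, padding=2)
print(up4(torch.randn(1, 64, 16, 16)).shape)  # torch.Size([1, 64, 64, 64]) -> exact x4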
8. Miscellaneous
Number of model parameters
-
Training data
Vimeo-90K
- training set of 64,612 7-frame sequences, with fixed resolution 448 x 256
- apply augmentation (rotation, flipping, random cropping)
- LR images: downscale HR images 4x with bicubic interpolation
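A minimal sketch of that LR generation step, assuming torch's bicubic resize stands in for whatever resampler the authors actually used:

import torch
import torch.nn.functional as F

hr = torch.rand(1, 3, 256, 448)  # one Vimeo-90K frame (H x W = 256 x 448)
lr = F.interpolate(hr, scale_factor=0.25, mode='bicubic', align_corners=False).clamp(0, 1)
print(lr.shape)  # torch.Size([1, 3, 64, 112])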
Test data
- Crop 8 pixels near the image boundary; the first six frames and the last three frames are removed
- Evaluated on the Y channel
Vid4
- The dataset itself contains visual artifacts, shows little inter-frame variation, and has limited motion
- It contains only four videos
SPMCS
Vimeo-90K
- Split according to motion velocity
Paper Analysis
1. Which critiques of the previously summarized papers does this paper resolve?
ME, MC (motion estimation / motion compensation)
3D Convolution
RNNs
2. Critique of This Paper
1) The simple flow estimation component could be improved
2) The paper describes the alignment as implicit, but this is questionable
Google Scholar Link
- https://scholar.google.co.kr/scholar?hl=ko&as_sdt=0%2C5&q=recurrent+back+projection&btnG=
GitHub Link
- https://github.com/alterzero/RBPN-PyTorch