
Complexity Optimized Video Codecs

©2003, diploma thesis, 118 pages




ID 9096
Krause, Michael: Complexity Optimized Video Codecs
Hamburg: Diplomica GmbH, 2005
Also: Universität Rostock, diploma thesis ("Diplomarbeit"), 2003
This work is protected by copyright. The rights thereby established, in particular those of translation, reprinting, recitation, extraction of figures and tables, broadcasting, microfilming or reproduction by other means, and storage in data processing systems, remain reserved, even where only excerpts are used. Any reproduction of this work or of parts of it is permitted, even in individual cases, only within the limits of the statutory provisions of the Copyright Act of the Federal Republic of Germany in its currently valid version, and is in principle subject to remuneration. Infringements are subject to the penal provisions of copyright law.
The use of common names, trade names, product designations etc. in this work does not justify the assumption, even in the absence of special marking, that such names are to be regarded as free in the sense of trademark legislation and may therefore be used by anyone.
The information in this work has been compiled with care. Nevertheless, errors cannot be ruled out completely, and neither the Diplomarbeiten Agentur nor the authors or translators accept any legal responsibility or liability for incorrect statements that may remain or for their consequences.
Diplomica GmbH
http://www.diplom.de, Hamburg 2005
Printed in Germany

If you have any questions or are interested in a more detailed resume of the author, please send an email to Michael.Krause76@gmx.net.

Michael Krause
Dipl.-Ing., EUR ING
Osterburger Str. 209b, 39576 Stendal, Germany
Homestead Lane, PO Box 6362, Christchurch 8004, New Zealand
Email: Michael.Krause76@gmx.net
Personal Data
Born November 16, 1976 in Stendal, Germany

Education and Professional Experience

02/2005 - present: University of Canterbury, Christchurch, New Zealand
  PhD student within the Communications Research Group at the Department of Electrical Engineering; dissertation topic: "Multiuser Space-Time Systems"

2004 - 2005: Siemens AG, Medical Solutions, Kemnath, Germany
  Project manager in the R&D area; task: "Development of a new digital X-ray System for Traumatology Applications"

2002 - 2004: Siemens AG, Siemens Graduate Program (SGP), Erlangen, Germany and Shanghai, P.R. China
  International Management Education Program at Siemens Medical Solutions:
  08/2003 - 03/2004: Siemens Shanghai Medical Equipment (SSME) Ltd., Shanghai, P.R. China: "Controlling and Optimization of Production Processes"
  12/2002 - 07/2003: Siemens AG, Erlangen, Germany: "Strategic Marketing and Innovation Management"
  04/2002 - 11/2002: Siemens AG, Erlangen, Germany: "Hardware and Software Design of wireless User Interfaces for Medical Applications"

2001 - 2002: Ericsson Mobile Communications AB, Lund, Sweden; master thesis and work experience
  Master thesis ("Diplomarbeit"): "Complexity optimized Video Codecs for 3G Mobile Phones"; research and development of a "Low Complexity MPEG/H.26x Video Encoder"

2000 - 2001: Lunds Tekniska Högskola (LTH), Lund University, Sweden
  Major: Signal Processing; Minor: Hardware and Software Design

1996 - 2000: University of Rostock, Germany
  Master degree ("Diplom") in Electrical Engineering in 2001, grading "Excellent"; Major: Electrical Engineering; Minor: Circuit Design and Project Management; Bachelor degree ("Vordiplom") in 1998, grading "Excellent"

1995 - 1996: Civil Service in Stendal, Germany
  Task: Homecare of elderly and disabled people

1983 - 1995: Middle and High School, Stendal, Germany

Diploma Thesis for Michael Krause
Complexity Optimized Video Codecs
Background
We are facing increasing bandwidth in mobile systems, and this opens up new applications in a mobile terminal. It will be possible to download, record, send and receive images and video sequences. Even with more bandwidth, images and video data must be compressed before they can be sent, because of the amount of information they contain. MPEG-4 and H.263 are standards for the compression of video data. The problem is that encoding and decoding algorithms are computationally intensive, and complexity increases with the size of the video. In mobile applications, processing capabilities such as memory space and calculation time are limited, and optimized algorithms for decoding and encoding are necessary. The question is whether it is possible to encode raw video data with low complexity. Single frames, e.g. from a digital camera, can then be coded and transmitted as a video sequence. On the other hand, the decoder needs to be able to handle sequences with different resolutions. Thus, decoders in new mobile terminals must decode higher resolution sequences with the same complexity as low resolution video requires.
Task
The work will involve literature studies of MPEG-4 and H.263. The goal is to investigate the possibility of encoding video data with low complexity and to find a way for optimized downscaling of larger sequences in a decoder. The work should include:
· Literature studies of MPEG-4 and H.263
· A theoretical study of how CIF sequences (352x288 pixels) can be downscaled to QCIF (176x144 pixels) size
· Finding optimized algorithms for a low complexity encoder
· Implementation of such an encoder on a microprocessor, e.g. a DSP
· A complexity analysis of processing consumption
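To illustrate the downscaling item above: the simplest pixel-domain approach averages each 2 × 2 block of a decoded CIF luminance plane into one QCIF pixel. The sketch below is a hypothetical baseline illustration in C, not one of the optimized algorithms the thesis itself develops (those work with the compressed data).

```c
#include <stdint.h>

/* Hypothetical baseline: downscale one luminance plane from CIF
 * (352x288) to QCIF (176x144) by averaging each 2x2 pixel block.
 * The "+ 2" implements rounding to the nearest integer. */
#define CIF_W 352
#define CIF_H 288

void downscale_cif_to_qcif(const uint8_t *src, uint8_t *dst)
{
    for (int y = 0; y < CIF_H; y += 2) {
        for (int x = 0; x < CIF_W; x += 2) {
            int sum = src[y * CIF_W + x]
                    + src[y * CIF_W + x + 1]
                    + src[(y + 1) * CIF_W + x]
                    + src[(y + 1) * CIF_W + x + 1];
            dst[(y / 2) * (CIF_W / 2) + (x / 2)] = (uint8_t)((sum + 2) / 4);
        }
    }
}
```

This pixel-domain method needs a fully decoded CIF frame first, which is exactly the complexity problem the task targets: four pixel reads and one division per output pixel, over 25344 output pixels per frame.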
Prerequisites
Prerequisite experience is fair C programming and signal processing skills; basic knowledge of H.263 and MPEG-4 is useful.
Supervisors
Martin Kruszynski, Ericsson Mobile Platforms AB, Lund, Tel.: +46 (0)46-231549
Martin Stridh, LTH Lund, Tel.: +46 (0)46-2224655
Prof. Erika Müller, University of Rostock, Tel.: +49 (0)381-4983579

Abstract
New mobile communication standards provide an increased bandwidth, which opens up many new media applications and services in future mobile phones. Video recording using the MMS¹ standard, video conferencing and downloading of movies from the Internet are some of those applications. Even if the data rate is high, video data needs to be compressed using international video compression standards such as MPEG-4 or H.263.
Efficient video compression algorithms are the focus of this thesis. The very limited computational capabilities of the terminals require a low complexity encoder and decoder. A low complexity encoder for use with MMS has been developed. Furthermore, algorithms for computationally optimized downscaling of larger sequences in a decoder are discussed.
The results from the low complexity encoder show that compression rates of up to 95% with suitable quality can be reached. A CPU with around 50MHz can encode digital video at frame rates of 10 to 15 frames/s. The compression standard is MPEG-4, which is well suited for computationally optimized encoding. Thus, efficient implementations of an MMS video encoder are possible.
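The abstract's figures can be put in perspective with a little arithmetic. Assuming QCIF frames in YUV 4:2:0 format (1.5 bytes per pixel, the format used for MMS video later in the thesis), the following hypothetical C sketch computes the raw and compressed bitrates implied by a given compression rate; the helper names are illustrative, not from the thesis.

```c
/* Back-of-the-envelope check of the abstract's numbers (an
 * illustration, not a calculation taken from the thesis itself). */

long qcif_raw_bytes_per_frame(void)
{
    /* 176x144 luminance pixels plus 2:1-subsampled U and V planes
     * give 1.5 bytes per pixel = 38016 bytes per frame. */
    return 176L * 144L * 3L / 2L;
}

long raw_bitrate_bps(int fps)
{
    return qcif_raw_bytes_per_frame() * 8L * fps;
}

long compressed_bitrate_bps(int fps, int compression_percent)
{
    /* A compression rate of 95% means only 5% of the raw bits remain. */
    return raw_bitrate_bps(fps) * (100L - compression_percent) / 100L;
}
```

At 15 frames/s the raw stream is about 4.56 Mbit/s; a 95% compression rate reduces it to roughly 228 kbit/s, a size that is practical for non-real-time MMS transfer.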
Based on literature studies, two algorithms for low complexity downscaling of CIF² sized sequences to QCIF³ size have been proposed. Both algorithms must be evaluated in a practical test before the best approach can be determined.

¹ Multimedia Messaging System: a standard which allows composing of multimedia messages including text, sound, pictures and video.
² Common Intermediate Format, 352 × 288 pixels
³ Quarter Common Intermediate Format, 176 × 144 pixels

Acknowledgements
I would like to express my gratitude to everyone who has contributed to this thesis. The work has been carried out at the department of Multimedia Technology at Ericsson Mobile Platforms in Lund, Sweden.
I am very thankful to Martin Kruszynski at Ericsson Mobile Platforms for the support and help during my work in Lund.
Further, I am grateful to my supervisors Martin Stridh, LTH Lund, and Professor Erika Müller, University of Rostock. They gave me important insights into video technology and many useful hints for this report.
Finally, thanks to the staff at the department of Multimedia Technology at Ericsson, Lund for their help in my daily work.

Contents

Abstract
Acknowledgements
Contents
List of Figures
List of Tables
Acronyms
1 Introduction
  1.1 3G: The Future of Mobile Communication
    1.1.1 From 1G to 3G
    1.1.2 Multimedia Applications for 2.5G and 3G
  1.2 Video Applications in New Mobile Terminals
2 Basics of Image and Video Coding
  2.1 Coding
    2.1.1 Coding Fundamentals
    2.1.2 Huffman Codes
    2.1.3 Arithmetic Codes
    2.1.4 Summary
  2.2 The Discrete Cosine Transform
    2.2.1 The Two-Dimensional Cosine Transform
    2.2.2 Quantization
  2.3 Differential Pulse Code Modulation (DPCM)
  2.4 JPEG Picture Compression
    2.4.1 JPEG Overview
    2.4.2 Sequential DCT-based Coding
    2.4.3 Progressive DCT-based Coding
    2.4.4 Lossless Coding
  2.5 Motion Estimation and Compensation
    2.5.1 Interframe Correlation
    2.5.2 Motion Estimation and Compensation
  2.6 Block Matching
    2.6.1 Nonoverlapped, Rectangular Block Matching
    2.6.2 Matching Algorithm
    2.6.3 Limitations and Improvements
  2.7 Formats of Digital Video
  2.8 MPEG-1/2 Standard
    2.8.1 Coding Model
    2.8.2 MPEG-1/2 Encoding
    2.8.3 MPEG-1/2 Decoding
    2.8.4 Enhancements in MPEG-2
  2.9 MPEG-4 Video Standard
    2.9.1 MPEG-4 Features
    2.9.2 MPEG-4 Objects, Profiles and Levels
    2.9.3 MPEG-4 in Mobile Communications
  2.10 H.263 Video Standard
3 Specification of a Low Complexity MMS Video Encoder
  3.1 MMS Video and Video Conferencing
  3.2 MMS Video Encoder Design Specification
    3.2.1 Main Structure
    3.2.2 Specification of Precompression Stage
    3.2.3 Specification of Main Compression Stage
4 A Low Complexity MPEG-4 based Video Encoder
  4.1 Encoder Structure
    4.1.1 Processing of Intraframes
    4.1.2 Processing of Interframes
    4.1.3 Coding Mode Decision
    4.1.4 Rate Control
    4.1.5 Encoding
  4.2 Program Structure
  4.3 Complexity Optimized Implementations of Encoding Functions
    4.3.1 General Optimization Rules
    4.3.2 Fast DCT Algorithm
    4.3.3 Fast Quantization
    4.3.4 Fast Bit Output
  4.4 Encoder Program Syntax
5 A Low Complexity JPEG-LS based Video Coder
  5.1 Encoder and Decoder Structure
    5.1.1 LoCo-I Encoding
    5.1.2 LoCo-I Decoding
  5.2 Program Structure
    5.2.1 Encoder Program Structure
    5.2.2 Decoder Program Structure
  5.3 Optimizations
  5.4 Program Syntax
6 Results
  6.1 Measurement Methods
  6.2 Low Complexity MMS Video Coding Results
    6.2.1 Intraframe Coding
    6.2.2 Results of MPEG-4 Interframe Coding
    6.2.3 Summary
  6.3 Practical Testing
7 Application Example: Video Recording for MMS Video
  7.1 The Ericsson Communicator
  7.2 A Video Application for MMS Video Recording
8 Video Decoding: CIF to QCIF Downscaling
  8.1 Straightforward Downscaling
  8.2 Computationally Optimized Downscaling
  8.3 Conclusions and Discussion of Implementation Aspects
9 Conclusions
Bibliography
A Test Sequences
B MATLAB Program for PSNR Calculation
Statements
Author's Declaration

List of Figures

1.1 Mobile Multimedia Devices
2.1 Huffman Coding Tree
2.2 Example of Arithmetic Coding
2.3 The 64 Basis Functions of an 8 × 8 DCT
2.4 Zigzag Scanning of DCT Coefficients
2.5 2-Dimensional DPCM
2.6 Sequential and Progressive Coding of JPEG Pictures
2.7 Block Diagram of JPEG Sequential DCT-based Encoding
2.8 Pixel Names for JPEG-LS Predictors
2.9 Block Matching Model
2.10 YUV 4:2:0 Format
2.11 A Group of Pictures in Display Order
2.12 MPEG-1/2 Encoder Structure
2.13 MPEG-1 Compressed Bitstream
2.14 MPEG-1/2 Decoder Structure
2.15 Profiles in MPEG-4
3.1 MMS Encoding Stages
3.2 MMS Encoder Block Diagram
4.1 Compression of Intraframes in MPEG
4.2 Blocking Artifacts caused by Quantization
4.3 Block Diagram of Standard Difference Frame Calculation
4.4 Block Diagram of the 1st simplified Difference Frame Calculation Method
4.5 Forward Loop for simple Difference Frame Calculation
4.6 Drift Effects in the Frequency Domain if the 1st simplified Interframe Calculation is used
4.7 Example of the Drift Error Matrix
4.8 Difference Frame Calculation in the DCT Domain
4.9 Frame Rate Control
4.10 Compressed File Size after Encoding the first 300 Frames of foreman and mobile Sequences using Quantizer as Parameter
4.11 Structure of Main Function
4.12 Structure of PutHeader Function
4.13 Structure of ProcessMb Functions
4.14 Structure of Frame Encoding Functions
4.15 MPEG Encoder Program Syntax
5.1 JPEG-LS Encoder Block Diagram
5.2 Contouring Artifacts in Near-Lossless Coding Mode
5.3 Structure of Main Function (Encoder Part)
5.4 Structure of ProcessLine Function
5.5 Structure of Main Function (Decoder Part)
5.6 Structure of UndoProcessLine Function
5.7 Syntax of the JPEG-LS based Encoder and Decoder Program
6.1 Comparison Aspects
6.2 Intraframe Coding: Compression Rate Results
6.3 Intraframe Coding: Mean PSNR
6.4 Subjective Quality of Lossy JPEG-LS and MPEG-4 Encoding
6.5 Intraframe Coding: Average CPU Cycles for Frame Encoding
6.6 Interframe Coding: Compression Rate Results
6.7 Interframe Coding: Mean PSNR
6.8 Subjective Quality of both Difference Frame Calculation Approaches
6.9 Interframe Coding: Average CPU Cycles for Frame Encoding
6.10 Subjective Quality of Practical Tests using the foreman Sequence
7.1 Ericsson Communicator
7.2 Mode Selection Window
7.3 Recording Window
7.4 Statistics Window
8.1 MPEG Decoder with Optimized Downscaling
8.2 Motion Vector Downsampling
8.3 Mean PSNR for Salesman Sequence
8.4 Mean PSNR for Table Tennis Sequence
8.5 Difference PSNR for Table Tennis Sequence
A.1 Foreman Sequence
A.2 Mobile Sequence

List of Tables

2.1 Huffman Coding of DC Coefficients in JPEG
2.2 JPEG-LS Predictors
2.3 MPEG-1/2 Layers
2.4 Visual Profiles and Object Types
2.5 Visual Profiles and Level Definitions in MPEG-4 Version 1
2.6 Main Differences between MPEG-4 Simple Profile and Baseline H.263
3.1 Video Conferencing and MMS Video Encoding Features and Requirements
4.1 Calculation of Difference Frames
4.2 Difference Frame Calculation using the 1st simplified Method
4.3 Step Size for quantizer scale Changes
4.4 Update Value for Increasing quantizer scale
4.5 Update Value for Decreasing quantizer scale
5.1 Sequence Header Format
5.2 Scan Header Format
6.1 Results from Encoding foreman at Bitrates of 256, 512 and 768 kbit/s

Acronyms

2-D: 2 Dimensional
3-D: 3 Dimensional
1G: First Generation
2G: Second Generation
2.5G: 2.5th Generation
3G: Third Generation
3GPP: 3rd Generation Partnership Project
AC: Alternating Current
AME: Adaptive Motion Estimation
AMVR: Adaptive Motion Vector Resampling
bps: bits per second
CIF: Common Intermediate Format
CODEC: COder / DECoder
DC: Direct Current
DCT: Discrete Cosine Transform
DFT: Discrete Fourier Transform
DPCM: Differential Pulse Code Modulation
DSP: Digital Signal Processor
EDGE: Enhanced Data Rates for Global Evolution
ETSI: European Telecommunications Standards Institute
FDCT: Forward Discrete Cosine Transform
fps: frames per second
GOB: Group Of Blocks
GOP: Group Of Pictures
GPRS: General Packet Radio Service
GSM: Global System for Mobile Communication
HSCSD: High Speed Circuit-Switched Data
IC: Integrated Circuit
IDCT: Inverse Discrete Cosine Transform
IEEE: Institute of Electrical and Electronics Engineers
IMTS: Improved Mobile Telephone Service System
ISO: International Standards Organization
ITU: International Telecommunication Union
ITU-T: International Telecommunication Union Telecommunication Standardization Sector
JPEG: Joint Photographic Expert Group
JPEG-LS: JPEG-LossLess
MAC: Maximum Average Correlation
MAD: Mean Absolute Difference
MATLAB: MATrix LABoratory
MMS: Multimedia Messaging System
MPEG: Moving Pictures Expert Group
PDC: Personal Digital Communication
PME: Predictive Motion Estimation
PSNR: Peak Signal to Noise Ratio
QCIF: Quarter Common Intermediate Format
RGB: Red Green Blue
ROI: Regions of Interest
RVLC: Reversible Variable Length Coding
SAD: Sum of Absolute Difference
SNR: Signal to Noise Ratio
UMTS: Universal Mobile Telecommunication System
VLC: Variable Length Code
WAP: Wireless Application Protocol

Chapter 1
Introduction
During the last 10 years, mobile communication has become a part of everyone's life. Mobile phones have changed the lives of many people, and nowadays it can easily be said that the development of the mobile phone was one of the milestones of the past century. Nothing besides the development of the personal computer and the Internet has changed the world so rapidly. Already today, the user is able to do much more than just make voice calls with a mobile phone. Dozens of features have been included in mobile terminals. Sophisticated phones support the transmission of text messages, emails, pictures and ring signals; furthermore, applications such as games, an MP3 player or voice control turn a mobile phone into a multimedia device. But what is the future? Which applications and services will come up in the next generation? This chapter gives a short overview of state-of-the-art mobile phones and presents an outlook on future mobile phone technology. Some technical problems and challenges are also mentioned, and the topics of this thesis are introduced.
1.1 3G: The Future of Mobile Communication
Third Generation (3G) plays a key role in mobile communication and its advances towards mobile multimedia. In general, the name 3G is a generic term for the next generation of mobile systems. So far, three generations have been developed, each more reliable, more flexible and with higher capacity than the previous one.
1.1.1 From 1G to 3G
1st Generation
The roots of the first mobile communication system go back to the 1960s, when
Bell systems developed the Improved Mobile Telephone Service System (IMTS).
Later, in the 1970s and 1980s, the progress in microprocessor technology in
combination with new mobile communication concepts led to the first generation
of mobile communication systems.
This generation was based on analog signal transmission and offered low quality voice services. The capacity of 1G systems was very limited, and they did not cover large geographic areas.
2nd Generation
In the late 1980s, the development of digital data based wireless mobile networks resulted in the second generation of mobile systems. In Europe, GSM (Global System for Mobile Communication) represents the 2G standard. The first GSM systems were introduced in 1991 and are still state-of-the-art in mobile communications within Europe. More or less similar technologies to GSM have been developed and introduced in America and Asia. In North America, 2G is known as IS95, whereas Personal Digital Communication (PDC) is the second generation of mobile systems in Japan. 2G wireless communication is a voice centric network with limited data capabilities. Additional applications such as fax, the short message service as well as WAP services up to a data rate of 9.6kb/s are supported. Unfortunately, this data rate is far from a suitable speed for multimedia or web-based applications.
2.5G
The explosion of Internet usage and multimedia applications led to a high demand for high-speed wireless data communication services. As mentioned, the data rate available with 2G is too slow, and new technologies are necessary. 2G+ opens up packet-based communication with a maximum data rate of 384kb/s. Three technologies form the 2G+ specification: High Speed Circuit-Switched Data (HSCSD), General Packet Radio Service (GPRS) and Enhanced Data Rates for Global Evolution (EDGE). HSCSD allows data rates up to 57.6kb/s by using four radio channel timeslots of 14.4kb/s at the same time. GPRS is an intermediate step towards 3G. It was designed to allow existing GSM networks to implement Internet services without waiting for full-scale 3G systems. Its advantage is that it works together with existing GSM and PDC systems. The maximum data rate is 171.2kb/s using 8 timeslots at once. But since it is common practice to allocate only 2-4 timeslots to one user, the maximum data rate cannot be used by customers. EDGE is a technology that increases the throughput per timeslot in HSCSD and GPRS systems. The EDGE enhancement of GPRS, called EGPRS, allows a maximum throughput of 384kb/s by using all 8 channels.
3rd Generation
The goal of the next generation is to provide a general mobile multimedia standard, which brings the world of numerous standards together. 3G systems open up a completely new world of multimedia and web-based applications. The framework for 3G was defined by the International Telecommunication Union (ITU) in the IMT-2000 project. The project definitions support voice and data communications with data rates up to 144kb/s for high-speed mobility (more than 120km/h), 384kb/s for low-speed mobility (less than 120km/h) and 2Mb/s for fixed-location terminals. For several reasons, e.g. compatibility with the 2G world, 3G systems will not become a single global standard soon. In Europe, UMTS is the 3rd generation of mobile systems and is expected to run on a commercial level in 2003.
The following subsection gives a summary of multimedia applications which are
already available in 2.5G or will be introduced with the 3G systems.
1.1.2 Multimedia Applications for 2.5G and 3G
As mentioned, the higher data rates of 2.5G and 3G systems will turn mobile phones into multimedia devices. The user can then surf the web, download music and picture files or, using a digital camera attached to the phone, record photos and movies and send them to friends. Furthermore, increased computational capabilities of the terminals allow 3D games, and new technologies such as Bluetooth open up wireless short-distance communication in order to play games with friends or exchange data with a personal computer. As a consequence, a new world of mobile multimedia devices will emerge, including all features of today's portable electronic devices, pocket computers and mobile terminals. Figure 1.1 shows the evolution towards mobile multimedia devices.
Figure 1.1: Mobile Multimedia Devices
Some of the features included in future 3G systems are already available in the
enhanced 2nd generation. An important technology is Multimedia Messaging
System (MMS). MMS has its roots in SMS (Short Message Service) and EMS
(Enhanced Messaging System), which support the transmission of text messages
(SMS) and, in the enhanced version, the exchange of simple pictures and ring
signals between mobile terminals of different manufacturers. In addition to text, MMS can transmit messages containing graphics, photographic images, audio
and even video clips between mobile terminals using WAP as bearer technology.
A built-in or attached camera allows users to produce digital postcards or short
movie sequences with their MMS-enabled phone. Since MMS takes advantage of
the WAP technology, it is not limited to 3G systems and MMS-enabled phones
based on GPRS data transmission will be introduced on the market soon. MMS
is the first technology that combines digital pictures and movies with common
mobile phones. With future 3G terminals, a further video application called
video conferencing will be available. Video conferencing makes use of real-time
streaming capabilities supported by 3G communication systems. Generally,
the implementation of video applications in mobile devices is a big challenge
for developers worldwide. The reason is that video information consists of large
amounts of data containing information for thousands of pixels together with
audio data. Even with increased bandwidth, video data cannot be transmitted
in its raw form and needs to be compressed before it can be sent.
1.2 Video Applications in New Mobile Terminals
As stated above, two different video applications will be available with the high
data rates of 3G communication systems. At first glance, video conferencing
and MMS video recording seem very similar. But in contrast to MMS video,
video conferencing requires a real-time encoder and decoder in order to decrease
the amount of data that is transmitted. For example, for high-speed mobility in
3G, the maximum data rate is 144 kb/s. Thus, only around 50%, or more precisely
64 kb/s, is available for unidirectional transmission. This limitation leads
to decreased video quality and results in high requirements on encoding hardware
and software. In the case of MMS video, the encoder does not need any real-time
streaming capabilities and therefore, data rate and quality are not necessarily
limited. Of course, technical implementation and transmission costs may limit
the quality of MMS video as well.
The design of a low complexity MMS video encoder is the main focus of this
thesis. First, the structure of such an encoder is developed and then, different
encoding approaches and coding schemes are discussed before a suitable low
complexity structure is proposed. The encoder design takes advantage of the
fact that mobile phones without the video conferencing feature do not need
real-time streaming capabilities. Test implementations of several algorithms
were done and the results concerning hardware requirements, encoding speed
and quality are discussed.
Another focus is downscaling of larger sequences. Many terminals will show
video sequences at QCIF size with 176 × 144 pixels. This format is recom-
mended by the 3G specification and will be used for streaming and MMS video.
But web-browsing capabilities allow downloading of larger video sequences, e.g.
at CIF size (352 × 288 pixels). This leads to the demand for real-time decoding
of those sequences using the limited capabilities available with the terminals.
Several downscaling possibilities are reviewed and a recommendation of suitable
algorithms is given.
The outline of this report is as follows:
· Chapter 2 gives an overview of general principles and standards of image
and video coding. Coding basics as well as important algorithms and
international coding standards are presented.
¹ streaming refers to data transmission between different terminals
· In Chapter 3, the specification of a low complexity encoder for MMS
video is derived from user demands and technical limitations. It discusses
general features and requirements and presents main stages of such an
encoder.
· Chapter 4 describes a low complexity video encoder based on the MPEG-4
standard. Both encoder design and software implementation have been
developed in this thesis.
· In Chapter 5, an alternative coding scheme for a low complexity video
encoder is discussed. The scheme is based on the JPEG-LS standard for
coding of still images. Its purpose is to combine good compression and
low complexity.
· Chapter 6 compares the results of the proposed encoder designs and selects
a design for a final low complexity MMS video encoder.
· Using the video encoder proposal of Chapter 6, Chapter 7 gives an example
of what an MMS video recording application could look like.
· Chapter 8 discusses the low complexity downscaling problem. This chap-
ter is based on literature studies and compares several downscaling ap-
proaches.
· Finally, Chapter 9 summarizes the main conclusions of this work and gives
an outlook on optimizations of the work done in this thesis.
Chapter 2
Basics of Image and Video Coding
Some basic knowledge is necessary to understand the development and design
processes of the following chapters. Therefore, this chapter introduces common
principles of image and video coding. In Section 2.1, a short summary of coding
fundamentals known from the information theory is given. Sections 2.2 and
2.3 introduce basic techniques such as Discrete Cosine Transform (DCT) and
Differential Pulse Code Modulation (DPCM). Both methods are widely used
by the JPEG picture compression standard, which is introduced in Section 2.4.
Important video compression standards such as MPEG and H.26x¹ are based on
JPEG. Commonly used principles of these standards are described in Sections
2.5 and 2.6. Especially estimation and compensation of motion are important
techniques to achieve both good quality and high compression while encoding
a sequence. The results are motion vectors, which describe the movement of a
certain picture area within consecutive frames. Afterwards, Section 2.7 explains
the format that is used for representation of digital video. The next step is to
introduce common video coding standards such as MPEG-1/2, MPEG-4 and
H.263; see Sections 2.8, 2.9 and 2.10.
2.1 Coding
Information can have many forms of appearance. For example, information may
simply be expressed by a number and a unit, such as a current or a speed. Or it can
have non-numerical characteristics such as speech or color. In general, coding is
a technique to make information accessible in technical systems. Section 2.1.1
gives an introduction to coding theory, whereas Sections 2.1.2 and 2.1.3
explain commonly used coding techniques.
¹ MPEG refers to MPEG-1/2 and MPEG-4, whereas H.26x means the H.261 and H.263 video compression standards
2.1.1 Coding Fundamentals
An information source is represented by a source alphabet S

S = \{s_1, s_2, \ldots, s_m\}    (2.1)

where s_i are source symbols. An information message can be a source symbol,
or a combination of source symbols. Similarly, a code alphabet A is defined by

A = \{a_1, a_2, \ldots, a_r\}    (2.2)

where a_j are code symbols. Encoding is then the procedure to assign a codeword
to the source symbol s_i

A_i = \{a_{i1}, a_{i2}, \ldots, a_{ik}\}    (2.3)

where the codeword A_i is a string of k code symbols assigned to the source
symbol s_i. In binary coding, the number of code symbols r is equal to 2, which
means only the digits "0" and "1" are available. The occurrence probabilities
of the source symbols s_i can be denoted by p(s_1), p(s_2), \ldots, p(s_m) and the
lengths of the codewords are given by l_1, l_2, \ldots, l_m. The average length of
the code is then

L_{avg} = \sum_{i=1}^{m} l_i \, p(s_i)    (2.4)

Further, the entropy of the source S, defined as H(S), describes the average
amount of information contained in a source symbol. The information content
I of a symbol is defined as

I(s_i) = -\log_2 p(s_i)    (2.5)

and then, H(S) is given by

H(S) = -\sum_{i=1}^{m} p(s_i) \log_2 p(s_i)    (2.6)
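To make equations (2.4) and (2.6) concrete, the following Python snippet (an illustrative sketch, not part of the thesis implementation) computes the entropy of a small hypothetical four-symbol source and the average length of a matching code:

```python
import math

def entropy(probs):
    """H(S) = -sum p(s_i) * log2 p(s_i), equation (2.6)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def average_length(probs, lengths):
    """L_avg = sum l_i * p(s_i), equation (2.4)."""
    return sum(l * p for l, p in zip(lengths, probs))

# Hypothetical source with probabilities 1/2, 1/4, 1/8, 1/8 and
# codeword lengths 1, 2, 3, 3 (the Huffman code of Figure 2.1).
probs = [0.5, 0.25, 0.125, 0.125]
lengths = [1, 2, 3, 3]
print(entropy(probs))                  # 1.75 bits per symbol
print(average_length(probs, lengths))  # 1.75 bits per symbol
```

For these dyadic probabilities the average code length equals the entropy, which is the best any code can achieve.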
2.1.2
Huffman Codes
Coding every source symbol with a codeword of the same length is not optimal.
A better way is to assign each source symbol a codeword whose length depends
on the occurrence probability of that symbol. This results in lower storage
requirements and is computationally not very complex. The Huffman code
(Huffman, 1952) is such an optimum code for an information source with a finite
number of source symbols in the source alphabet S [4]. At present, the Huffman
code is the most frequently used code in communication and information
technology.
Assigning Huffman Codes
In order to assign an optimal code to the source symbols, the occurrence prob-
abilities can be ordered as

p(s_1) \ge p(s_2) \ge \cdots \ge p(s_{m-1}) \ge p(s_m)    (2.7)

For an optimal code, the codeword lengths should then satisfy

l_1 \le l_2 \le \cdots \le l_{m-1} \le l_m    (2.8)

From equations (2.7) and (2.8) Huffman derived the following rules:
· The codeword of a more probable source symbol should not be longer than
that of a less probable source symbol. Furthermore, in contrast to equation
(2.8), the codewords assigned to the two least probable source symbols should
have the same length; otherwise, the last bit in codeword A_m would be
redundant:

l_1 \le l_2 \le \cdots \le l_{m-1} = l_m    (2.9)

· Each possible sequence of length l_m - 1 must be used to generate an
optimum code.
Figure 2.1 shows the way of generating a Huffman code using the previously
defined rules.

Figure 2.1: Huffman Coding Tree (codewords 1, 01, 001 and 000 assigned to
source symbols with probabilities 1/2, 1/4, 1/8 and 1/8)
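The tree construction behind such a figure can be sketched programmatically by repeatedly merging the two least probable nodes. The Python fragment below is an illustration (not thesis code) and returns only the codeword lengths, since those are what equation (2.4) needs:

```python
import heapq

def huffman_lengths(probs):
    """Return the Huffman codeword length for each symbol by
    repeatedly merging the two least probable tree nodes."""
    # Heap entries: (probability, tiebreaker, indices of leaf symbols below)
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)
        p2, tie, syms2 = heapq.heappop(heap)
        # Merging two nodes adds one bit to every codeword below them.
        for s in syms1 + syms2:
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, tie, syms1 + syms2))
    return lengths

print(huffman_lengths([0.5, 0.25, 0.125, 0.125]))  # [1, 2, 3, 3]
```

The result matches the codeword lengths of Figure 2.1.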
Modified Huffman Codes
In order to use Huffman coding, a set of all codewords called codebook is needed
for communication between the transmitter and the receiver. In case of a very
large number of improbable source symbols, the size of the codebook M will
require a large amount of memory
M = \sum_{i \in S} l_i    (2.10)

where l_i denotes the length of the ith codeword. The memory problem can
be solved using a modified Huffman code, which results in a shorter codebook
with almost the same efficiency.
First, the source alphabet S is categorized into two groups with

S_1 = \{s_i \mid p(s_i) > 2^{-\ell}\}    (2.11)

S_2 = \{s_i \mid p(s_i) \le 2^{-\ell}\}    (2.12)

where \ell is the bit number of the codeword. A new source symbol ELSE (Weaver,
1978) with an occurrence probability equal to p(S_2) is established [4]. Then,
the Huffman coding algorithm is applied to the source alphabet S_3 with

S_3 = S_1 \cup \{ELSE\}

Finally, the codebook of S_3 is converted to that of S as follows:
· Codewords for the symbols in S_1 remain unchanged.
· The codeword assigned to ELSE is used as a prefix for the symbols in S_2.
The memory required for the codebook is then

M = \sum_{i \in S_1} l_i + l_{ELSE}    (2.13)

which is much less than the memory size in equation (2.10).
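The two conversion rules can be sketched in a few lines of Python. This is an illustration only: the codebook, the ELSE codeword and the fixed bit width `raw_bits` are made-up examples, not values from the thesis.

```python
def encode_modified(symbol, s1_codes, else_code, raw_bits):
    """Modified Huffman encoding: symbols of S_1 keep their Huffman
    codeword; a symbol of S_2 is sent as the ELSE codeword followed
    by a fixed-length binary index."""
    if symbol in s1_codes:
        return s1_codes[symbol]
    return else_code + format(symbol, "0{}b".format(raw_bits))

# Made-up example: symbols 0 and 1 are probable (in S_1), the rest rare.
s1_codes = {0: "1", 1: "01"}
print(encode_modified(0, s1_codes, "00", 4))  # prints 1
print(encode_modified(9, s1_codes, "00", 4))  # prints 001001 (prefix 00 + 1001)
```

Only the short codebook for S_1 plus the single ELSE codeword must be stored, as equation (2.13) states.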
2.1.3 Arithmetic Codes
Huffman coding is optimum concerning coding redundancy. However, it has
been shown that Huffman coding can be rather inefficient, especially when the
differences in the occurrence probabilities of the source symbols are large. This
inefficiency is caused by the block-based coding in the Huffman coding algorithm.
In contrast to Huffman coding, arithmetic coding is stream-based, which means
that a string of source symbols is coded as a string of code symbols.
Arithmetic encoding
In arithmetic coding, the source symbols are arranged according to their occur-
rence probability in the interval [0,1). Coding starts with picking up the first
subinterval in the [0,1) interval. Picking up means that any real number in the
subinterval can be a pointer to the subinterval and thus, can represent the first
source symbol. In order to code the next symbols, arithmetic coding uses a
recursive algorithm. The source symbols in the subinterval are arranged in the
same way as in the original [0,1) interval. Then, the subinterval representing
the next source symbol is picked up and so on. Figure 2.2 shows an example of
arithmetic coding. The final interval in the figure represents the source sequence
s_1 s_2 s_3 s_4 s_5 s_6.
Arithmetic decoding
Decoding works in the opposite way. The decoder contains encoding information
of Figure 2.2 and compares the lower end point of the final subinterval with all
the end points of the first interval. Then, the first source symbol in the string
is decoded and the decoder switches to the next subinterval for decoding the
following symbol and so on.
An interesting fact is that only the first interval of Figure 2.2 must be known in
the decoder. With this knowledge, the decoder can reconstruct the subintervals
itself and undo the coding procedure.
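The interval-narrowing step described above can be sketched as follows. The three-symbol model is a made-up example for illustration, not the model of Figure 2.2:

```python
def arithmetic_intervals(model, symbols):
    """Narrow the interval [0, 1) step by step: each coded symbol
    selects its own subinterval inside the current interval."""
    low, high = 0.0, 1.0
    for s in symbols:
        width = high - low
        sub_low, sub_high = model[s]  # subinterval of s within [0, 1)
        low, high = low + width * sub_low, low + width * sub_high
    return low, high  # any number in [low, high) represents the whole string

# Made-up three-symbol model; the subintervals partition [0, 1).
model = {"a": (0.0, 0.5), "b": (0.5, 0.75), "c": (0.75, 1.0)}
print(arithmetic_intervals(model, "ab"))   # (0.25, 0.375)
print(arithmetic_intervals(model, "abc"))  # (0.34375, 0.375)
```

Note how every additional symbol shrinks the interval further, which is exactly where the precision problem mentioned in Section 2.1.4 comes from.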
Figure 2.2: Example of Arithmetic Coding (starting from the interval [0, 1),
the source string s_1 s_2 s_3 s_4 s_5 s_6 is successively narrowed down to the
final subinterval [0.1058175, 0.105825))
2.1.4 Summary
Variable-length codes like Huffman codes are optimal for coding a finite number
of source symbols with different occurrence probabilities. It is known that Huff-
man codes have minimum redundancy and therefore, Huffman coding is used
in a wide range of applications. Huffman coding has been adopted by many
international image and video standards such as JPEG, H.26x and MPEG.
Nevertheless, Huffman codes are not optimum if some source symbols have small
probabilities or their number is large. The size of the codebook will then in-
crease drastically and require large memory space. Modified Huffman coding
can solve the codebook problem, but the block-based coding algorithm can still
produce code whose average length lies noticeably above the source entropy.
Therefore, stream-based coding, called arithmetic coding, has been developed
to produce more efficient code.
The problem of the algorithm is the precision needed to code long strings of
source symbols. Both Huffman and arithmetic coding have been included in
most image and video compression standards.
2.2 The Discrete Cosine Transform
The Discrete Cosine Transform (DCT) is an essential part of many image and
video compression standards. For picture coding, the two-dimensional DCT is
used, which is introduced in Section 2.2.1. Quantization is often connected to
the DCT and performed on the DCT coefficients. Section 2.2.2 explains why
quantization techniques play an important role in image and video coding.
2.2.1 The Two-Dimensional Cosine Transform
The DCT transforms a given number of input samples into a weighted sum of
orthogonal waveforms. If, for example, eight pixel values are arranged in one line,
the DCT response is a line of eight cosine-weighted values. In image and video
applications such as JPEG, MPEG or H.26x, the DCT is based on blocks of 8×8
pixels. Equations (2.14) and (2.15) give the definition of the 8 × 8 forward and
inverse DCT.
FDCT: S_{uv} = \frac{1}{4} C_u C_v \sum_{i=0}^{7} \sum_{j=0}^{7} s_{ij} \cos\frac{(2i+1)u\pi}{16} \cos\frac{(2j+1)v\pi}{16}    (2.14)

IDCT: s_{ij} = \frac{1}{4} \sum_{u=0}^{7} \sum_{v=0}^{7} C_u C_v S_{uv} \cos\frac{(2i+1)u\pi}{16} \cos\frac{(2j+1)v\pi}{16}    (2.15)

C_u, C_v = \begin{cases} 1/\sqrt{2} & \text{for } u, v = 0 \\ 1 & \text{otherwise} \end{cases}

where s_{ij} is the value of the pixel at position (i, j) in the block and S_{uv} is the
transformed DCT coefficient.
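A direct, unoptimized sketch of equation (2.14) in Python may help; this is for illustration only, as real codecs use fast factorized DCTs rather than this quadruple loop:

```python
import math

def fdct_8x8(block):
    """Forward 8x8 DCT of equation (2.14); block is an 8x8 list of pixel rows."""
    def c(k):  # normalization factor C_u resp. C_v
        return 1 / math.sqrt(2) if k == 0 else 1.0
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            acc = 0.0
            for i in range(8):
                for j in range(8):
                    acc += (block[i][j]
                            * math.cos((2 * i + 1) * u * math.pi / 16)
                            * math.cos((2 * j + 1) * v * math.pi / 16))
            out[u][v] = 0.25 * c(u) * c(v) * acc
    return out

# A flat block puts all its energy into the DC coefficient S_00.
coeffs = fdct_8x8([[1.0] * 8 for _ in range(8)])
print(round(coeffs[0][0], 6))  # 8.0
```

All other coefficients of the flat block are (numerically) zero, illustrating why smooth image areas compress so well after quantization.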
Figure 2.3 shows the 64 basis images of such a DCT.

Figure 2.3: The 64 Basis Functions of an 8 × 8 DCT

The output is a matrix of the same dimension as the input matrix, where every
element represents the coefficient for the waveform at the corresponding position
in Figure 2.3. It is obvious that the frequency in this figure increases along the
diagonal from the top left to the bottom right corner. Furthermore, zigzag
scanning of the DCT coefficients is employed. Zigzag scanning means reordering
the coefficients along increasing frequency. Starting at the lowest frequency,
where f = 0, Figure 2.4 shows the frequency order.
Figure 2.4: Zigzag Scanning of DCT Coefficients
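Assuming the scan of Figure 2.4 follows the order common to JPEG and MPEG, the index sequence can be generated by walking the anti-diagonals of the block in alternating direction (illustrative Python, not thesis code):

```python
def zigzag_order(n=8):
    """Visit the (i, j) indices of an n x n block along anti-diagonals
    of increasing i + j, alternating the direction on each diagonal."""
    order = []
    for d in range(2 * n - 1):
        diag = [(i, d - i) for i in range(n) if 0 <= d - i < n]
        # Odd diagonals run top-right to bottom-left, even ones the reverse.
        order.extend(diag if d % 2 else reversed(diag))
    return order

print(zigzag_order()[:6])  # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```

Reordering the 64 coefficients this way groups the (mostly zero) high-frequency values at the end, which makes run-length coding effective.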
2.2.2 Quantization
In JPEG, MPEG and H.26x, quantization is applied to the DCT coefficients.
Generally, quantization is a method to reduce the amount of data. Since it is
not invertible, quantization always causes a loss of information and thus, the
quantization error is the main source of loss in lossy compression. Most compression
standards use a quantization matrix to quantize the coefficients. Equation (2.16)
defines the quantization process for each point of the matrix.
S^Q_{uv} = \text{round}\left(\frac{S_{uv}}{Q_{uv}}\right)    (2.16)

where u, v = 0, 1, \ldots, 7. Similarly, dequantization of each point is defined as

R_{uv} = S^Q_{uv} \times Q_{uv}    (2.17)
An important aspect is the choice of the quantization matrix Q_{uv}. When
defining Q_{uv}, the visual properties of the human eye should be considered.
Normally, quantization matrices are defined so that the entries for higher
frequencies are much larger than those for lower frequencies. The background is
that the human eye is significantly more sensitive to low-frequency changes than
to high-frequency changes. In this case, most of the high-frequency DCT
coefficients will be zero after quantization and do not need to be coded. The
compression achieved by quantization is significant, whereas the visual quality
remains good or at least suitable for the desired application. It is noted that the
choice of the quantizer is often not based on results from psychovisual
experiments alone; rate control is another important aspect when choosing a
certain quantizer. Several aspects of the selection of an optimal quantizer are
mentioned in a later chapter.
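Equations (2.16) and (2.17) map directly to code. The sketch below (illustrative Python with a made-up flat quantization matrix) shows that the rounding step is where the information is lost:

```python
def quantize(coeffs, q):
    """Equation (2.16): divide each coefficient by its matrix entry
    and round to the nearest integer -- the lossy step."""
    return [[round(coeffs[u][v] / q[u][v]) for v in range(8)] for u in range(8)]

def dequantize(levels, q):
    """Equation (2.17): rescale; the rounding error is not recovered."""
    return [[levels[u][v] * q[u][v] for v in range(8)] for u in range(8)]

# Made-up flat quantization matrix: every entry is 16.
q = [[16] * 8 for _ in range(8)]
coeffs = [[0.0] * 8 for _ in range(8)]
coeffs[0][0] = 100.0
levels = quantize(coeffs, q)
print(levels[0][0])                 # 6  (100/16 = 6.25 rounds to 6)
print(dequantize(levels, q)[0][0])  # 96, not 100: information was lost
```

Larger entries in the quantization matrix force more coefficients to zero, trading visual quality for compression.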
2.3 Differential Pulse Code Modulation (DPCM)
DPCM coding is a very common technique in signal processing to remove
redundancy from signals. It is used in a wide range of applications and has been
adopted by international image and video standards.
In DPCM coding, the quantized difference between the signal itself and its
Details
Pages: 118
Edition: Original edition
Year: 2003
ISBN (eBook): 9783832490966
ISBN (Paperback): 9783838690964
DOI: 10.3239/9783832490966
File size: 1.3 MB
Language: English
Institution / University: Universität Rostock – Elektronik und Informationstechnik
Publication date: November 2005
Grade: 1.0
Keywords: videokomprimierung, mpeg, multimedia messaging system, videoconferencing