
Development of a PC-based Real Time Power Electronics Control Platform

Entwicklung einer Echtzeitregelungs- und Steuerungsplattform mit industriellen Standardkomponenten für leistungselektronische Anwendungen

©2005, Studienarbeit, 178 pages

Abstract
This thesis shows that a very fast and flexible real-time controller can be set up using only off-the-shelf components on a PC platform. To best fit the requirements, the hardware and software were chosen carefully and compared with other products on the market.
After evaluating the timing and handling structure, the real-time controller and the system drivers were implemented in the 'C' programming language. To ensure flexibility, the program was tested on two different hardware configurations, and the timing analyses and measurements are compared for both.
All code is written to be easy to understand and quickly learned by students. To prove the system's capabilities, a PI control algorithm was implemented to control the torque of a DC machine. The system ran on a 333 MHz Pentium II with algorithm times of about 600 ns and an overall sampling rate of 20 kHz.
This very flexible system, together with the real-time operating system RTLinux running alongside Linux, makes it possible to build a control platform for power electronics that can be adapted to almost any requirement in a very short time. The superior computation power of standard PCs (and the option to upgrade the CPU) is a decisive advantage over common DSP solutions, especially when system cost is an important factor.
The use of open-source operating systems allows full control and inspection of the whole system. Software and utilities for almost all applications are available in source code, and competent help can be obtained from an active community on the internet as well as from professional companies.
Together, these facts ensure that the lifetime of this PC-based control platform is neither restricted by the availability of spare parts from single companies nor limited by the software and hardware possibilities of the year 2001.


Reading Sample



ID 6632
Fiedel, Alexander: Development of a PC-based Real Time Power Electronics Control
Platform - Entwicklung einer Echtzeitregelungs- und Steuerungsplattform mit
industriellen Standardkomponenten für leistungselektronische Anwendungen
Hamburg: Diplomica GmbH, 2003
Also: Nürnberg, Universität, Studienarbeit, 2005
This work is protected by copyright. The rights thereby established, in particular those of translation, reprinting, public presentation, extraction of illustrations and tables, broadcasting, microfilming or reproduction by other means, and storage in data processing systems, are reserved, even if only excerpts are used. Any reproduction of this work or of parts of it is permitted, even in individual cases, only within the limits of the copyright law of the Federal Republic of Germany in its currently applicable version, and is in principle subject to remuneration. Violations are subject to the penal provisions of copyright law.
The reproduction of common names, trade names, product designations, etc. in this work, even without special identification, does not justify the assumption that such names are to be regarded as free in the sense of trademark legislation and may therefore be used by anyone.
The information in this work has been compiled with care. Nevertheless, errors cannot be ruled out completely, and neither the Diplomarbeiten Agentur nor the authors or translators assume any legal responsibility or any liability for possibly remaining incorrect statements and their consequences.
Diplomica GmbH
http://www.diplom.de, Hamburg 2003
Printed in Germany

Table of Contents
1 Introduction
1.1 Organization
1.2 Conventions
2 Requirements
3 Choosing the Hardware
3.1 PC System
3.1.1 Bus: PCI or ISA based System?
3.1.2 CPU and System
3.1.3 The Used Hardware
3.2 DAC and ADC Hardware
3.2.1 The Data Acquisition Hardware
3.2.2 Used ADC and DAC Hardware
3.3 Hardware description
3.3.1 PCI Slave Carrier Board, APC8620
3.3.2 Analog Input Module: Acromag IP340
3.3.3 Analog Output Module: Acromag IP220
4 Choosing the right Software
4.1 System Specific Demands
4.2 Standard Operating Systems (OSs) and Real-Time Control
4.3 Real Time Operating Systems
4.3.1 Definition of a Real-Time System
4.3.2 Used Definition of 'Real Time'
4.3.3 Market Overview
4.4 Characteristics of Suitable Real Time Operating Systems
4.4.1 The Ordeal of Options
4.4.2 VxWorks
4.4.2.1 Tasks
4.4.2.2 Memory
4.4.2.3 Interrupts
4.4.3 QNX
4.4.3.1 Network
4.4.3.2 Tasks
4.4.3.3 Memory
4.4.3.4 Interrupts
4.4.4 LynxOS
4.4.4.1 Tasks
4.4.4.2 Interrupts
4.4.4.3 Memory
4.4.5 RT-Linux 3.0
4.4.5.1 Structure of RTLinux
4.4.5.2 Architecture
4.4.5.3 Task Types
4.4.5.4 Inter-Task Communication
4.4.5.5 Communication and Interaction with Linux Processes
4.4.5.6 A Typical RTLinux Application
4.4.5.7 Short Summary of the Advantages and Disadvantages of RTLinux
4.5 Network
4.5.1 The Possibilities
4.5.2 The easy Solution
4.5.3 Built-in Alternative: Linux Telnet
5 Realization
5.1 The Software of the Control Platform - what should it do?
5.1.1 Controller Concept
5.2 Overview of the used Linux Functions
5.3 Using Kernel Modules
5.4 Introduction to PCI
5.4.1 Linux and the PCI subsystem
5.4.1.1 Reading and Writing to a Device
5.5 Accessing the Hardware
5.5.1 Acromag Carrier Board APC8620
5.5.1.1 General Hardware Programming
5.5.1.2 Programming the Carrier Board
5.5.2 Acromag IP340 - The A/D Module
5.5.2.1 General Hardware Programming
5.5.2.2 Programming the IP340 ADC Module
5.5.3 Acromag IP220 - The D/A Module
5.5.3.1 General Hardware Programming
5.5.3.2 Programming the IP220 DAC Module
5.6 RTLinux
5.6.1 RTLinux Functions
5.6.2 RTLinux Programming
5.6.2.1 The First Example
5.6.3 Adding Interrupt Support
5.6.4 Adding RTFIFOs
5.7 Building the Control Platform Together
5.8 Linux - RTLinux Communication
5.8.1 Linux Controls RTLinux
5.8.2 Observation of RTLinux Signals
5.9 Implementation of Network: netcat
5.9.1 One Way Connections
5.9.2 Two Way Connections
6 Timing Evaluations
6.1 CPU Speed Comparisons
7 Implementing a Constant Torque PI Controller
8 Future Improvements
9 Summary
10 Zusammenfassung
11 References and Resources
12 Appendix A: UNIX and Linux
12.1 What is UNIX?
12.2 So, what is Linux?
13 Appendix B: Installing the PC System
13.1 Red Hat Linux 6.2 Installation - Installation Protocol
13.2 Linux Security Annotation
13.3 Linux Kernel and RTLinux Installation
13.3.1 Preparations
13.4 Compiling the new Linux Kernel
13.5 Configuring the Bootloader LILO
13.6 RTLinux Configuration and Compilation
13.7 Installing the RTLinux Documentation
13.8 The File 'kernelconfig.txt'
13.9 The File 'rtlinuxconfig.txt'
14 Appendix C: Using Linux
14.1 Special Linux Files of Interest
14.2 Linux' 'gcc'
14.2.1 Compiling User Space Programs
14.2.2 Compiling Linux Kernel Modules
14.2.3 Compiling RTLinux Kernel Modules
14.2.4 Using Makefiles
14.3 Archiver 'tar'
14.4 The Text Editor 'joe'
14.5 KDE Enhanced Editor
14.6 Floppy Disk Access via 'mtools'
14.7 Mounting Devices and Partitions
15 Appendix D: 'netcat' Installation
15.1 Linux
15.2 Windows
15.3 Test
15.4 Used Command Line Parameters of nc
16 Appendix E: Taking Measurements
16.1 Performance Tests
16.1.1 Writing to DAC
16.1.1.1 Writing two DAC channels
16.1.1.2 Writing three DAC channels
16.1.1.3 Writing eight DAC channels
16.1.2 Writing to ADC
16.1.2.1 Writing two DAC channels + one ADC channel
16.1.2.2 Writing two DAC channels + two ADC channels
16.1.2.3 Writing two DAC channels + eight ADC channels
16.1.3 Writing to RTFIFO
16.1.3.1 Writing 2 DAC channels + put one value to the RTFIFO
16.1.3.2 Writing 2 DAC channels + put two values to the RTFIFO
16.1.3.3 Writing 2 DAC channels + put eight values to the RTFIFO
16.2 The PI Controller
16.2.1 Writing 2 DAC channels, PI controller and output of the result
16.3 The 'C40 Benchmark'
16.3.1 Writing 2 DAC channels + the 'C40 Benchmark' algorithm
17 Appendix F: 'C' Code
17.1 The RTLinux Program
17.2 The Linux-RTLinux Control Program
17.3 The Linux Program to Print the RTFIFO
17.4 The Implemented PI Controller
17.5 The Used 'C40' Benchmark Code
17.5.1 The Definitions
17.6 The Algorithms
17.7 Function Calls in RTLinux Code
17.8 Pentium III 1000 MHz Machine
17.9 Pentium II 333 MHz Machine
17.10 I/O Hardware
17.11 Miscellaneous

Index of Tables
Table 3.1 Bus system comparison: PCI vs. ISA
Table 3.2 Comparison of A/D hardware
Table 3.3 Comparison of D/A Hardware
Table 3.4 Acromag IP340 calibration errors
Table 3.5 Acromag IP220 calibration errors
Table 4.1 General OS comparison
Table 4.2 RTNet vs. netcat
Table 5.1 General Configuration Space of a PCI Card
Table 5.2 Configuration Space of APC8620
Table 5.3 APC8620 Carrier Bd. Memory Map
Table 5.4 Acromag APC8620 Carrier Status/Control Register
Table 5.5 IP340 ID Space Identification
Table 5.6 IP340 I/O Space Address Memory Map
Table 5.7 IP340 Channel Control/Status Register
Table 5.8 IP340 Channel Enable Control Register
Table 5.9 IP340 Digital Output Codes and Input Voltages
Table 5.10 IP220 ID Space Identification
Table 5.11 IP220 I/O Space Address Memory Map
Table 5.12 IP220 Structure of the Data for the DAC
Table 5.13 IP220 BOB Output Data Format - the 4 LSBs are shown as '0'
Table 6.1 More detailed structure of one cycle, summary

Illustration Index
Illustration 3.1 Acromag APC8620
Illustration 3.2 Acromag IP340
Illustration 3.3 Acromag IP220
Illustration 4.1 Deadlines
Illustration 4.2 Example for more than one discrete deadline
Illustration 4.3 RTOS market overview
Illustration 4.4 VxWorks kernel structure [35]
Illustration 4.5 QNX kernel structure [30]
Illustration 4.6 RTLinux structure [6]
Illustration 4.7 Example for RTLinux/Linux preemption [6]
Illustration 4.8 DHC and DCC versus stand alone
Illustration 4.9 RTLinux/Linux and netcat
Illustration 5.1 Timing structure of the controller
Illustration 5.2 Hierarchical PCI bus system
Illustration 5.3 Find PCI devices
Illustration 5.4 Time of channel bank conversions
Illustration 5.5 Overview of the system
Illustration 5.6 Program start: insmod
Illustration 5.7 RT FIFO handler
Illustration 5.8 The interrupt handler wakes up the thread
Illustration 5.9 Exit the program: rmmod
Illustration 5.10 Data sequence from RTLinux
Illustration 6.1 More detailed structure of one cycle
Illustration 6.2 Computation time of test algorithm
Illustration 7.1 Experiment set-up
Illustration 7.2 Control loop
Illustration 7.3 Speed-torque characteristics of the DC machines
Illustration 7.4 IP controller: Iref from 2 A to 7 A
Illustration 7.5 IP controller: Iref from 2 A to 7 A (zoomed in)
Illustration 7.6 IP controller: load change
Illustration 7.7 IP controller: Iref from 2.5 to 7.5 A, K=0.02
Illustration 7.8 IP controller hardware set-up
Illustration 16.1 Outline of measurements
Illustration 16.2 Writing to two DAC channels
Illustration 16.3 Writing to three DAC channels
Illustration 16.4 Writing to eight DAC channels
Illustration 16.5 Writing to two DAC channels and read ADC once
Illustration 16.6 Writing to two DAC channels and read ADC twice
Illustration 16.7 Writing to two DAC channels and read ADC eight times
Illustration 16.8 Write two DAC channels and to RTFIFO once
Illustration 16.9 Write two DAC channels and to RTFIFO twice
Illustration 16.10 Write two DAC channels and to RTFIFO eight times
Illustration 16.11 Write to two ADC channels and the PI algorithm

List of Abbreviations and Acronyms
0000hex: number in hexadecimal format
0000dec: number in decimal format
0000bin: number in binary format
0x0000: number in hexadecimal format - usually used in code
386: Intel processor series from the 386DX on
AGP: Accelerated Graphics Port
API: Application Programming Interface
board: motherboard - main part of the PC
BOB: Bipolar Offset Binary - the Acromag IP220 module uses this format
CC: C Compiler
CPU: Central Processing Unit, also called processor
DDD: Data Display Debugger - graphical front end of GDB [34]
DSP: Digital Signal Processor
GCC: GNU CC
GDB: GNU Debugger
GNU: abbreviation of "GNU is not UNIX". GNU is a project of the Free Software Foundation; its goal is to produce a freely distributable UNIX software system
GPC: Gating Pulse Controller
GPL: GNU Public License
HDD: Hard Disk
I/O: Input/Output, i.e. all forms of communication: HDD, network etc.
IGBT: Insulated Gate Bipolar Transistor
IPC: Inter-Process Communication
ISR: Interrupt Service Routine
KDE: K Desktop Environment - window manager for X
LDT: Local Descriptor Table
LILO: LInux LOader. The Linux boot loader; also used to switch between different Linux kernels or operating systems
LSB: Least Significant Bit
MBR: Master Boot Record of the hard disk
MSB: Most Significant Bit
NIC: Network Interface Card
NMT: New Mexico Institute of Technology
OS: Operating System
P&P: Plug and Play
PC: Personal Computer - abbreviation usually only used for x86-compatible computers
PCI: Peripheral Component Interconnect bus
PIC: Programmable Interrupt Controller
PnP: Plug and Play
RO: Read Only
RTFIFO: Real Time FIFO, the FIFO of RTLinux; term used to distinguish the FIFO of RTLinux (RT FIFO) from the FIFO of the ADC module (FIFO)
RTOS: Real Time Operating System
RW: Read and Write
SDRAM: Synchronous Dynamic Random Access Memory
threads: kernel and user threads; here used in the sense of kernel threads
us: µs (microseconds) - only used in 'C' code
VSI: Voltage Source Inverter
WO: Write Only
X: abbreviation for X-Window - the graphical environment of Linux
x86: Intel-compatible processor series from the i386DX on

CHAPTER 1 INTRODUCTION

1 Introduction
In recent years, the demands of modern power electronics and control have been rising steadily, and they are still increasing today. Various kinds of control applications, such as motor, power-flow or inverter control, need higher speed and more precise algorithms as well as comfortable, user-friendly human-machine interfaces. Until now, expensive, specially designed hardware such as integrated DSP or FPGA boards has usually been used for these purposes. The disadvantages of such hardware are its inflexibility (fixed hardware) and the missing option to upgrade the often custom-designed components when the requirements change, as they commonly do. The short average life cycle of semiconductor products such as DSPs is a particularly serious problem nowadays. Hardware and software are designed together for a special purpose and are restricted to the abilities and speed of the components used.
The main emphasis of this Studienarbeit is to prove that it is possible to develop a PC-based real-time platform for power electronics control that combines the flexibility of off-the-shelf PC hardware components and the possibility to use generic software with the timing precision and computation power of DSP systems, without the disadvantages of specially designed implementations. As an additional advantage, the user interface and network capabilities of PC hardware can be used.
This project will be the foundation for further development and enhancement. Therefore, suitable hardware and software should be chosen, a complete timing strategy defined and then a basic sample control program implemented. Extensive documentation and testing should ensure simple setup and reconfiguration of future systems.
Timing analyses and the implementation of a controller algorithm will prove the developed PC-based real-time platform to be fast, flexible and easy to program.
1.1 Organization
To characterize the system, chapter 2 lists the requirements the control platform has to fulfill. According to these, the hardware is then chosen: the PC system as well as the I/O hardware.
Chapter 4 deals with the demands on an operating system and characterizes the term "real-time OS". Then an appropriate operating system is chosen, and the implementation of network support is briefly discussed.
After hardware and software have been chosen, the structure of the future software program is given. To introduce hardware access and programming in the Linux and RTLinux environment, Linux, RTLinux and PCI specific programming are introduced. The functions of the hardware used are listed, as well as how to implement their functionality in the program. After this introduction, the complete controller program's functionality is illustrated in flowcharts for a better overview.
Performance and timing are measured and the results listed in chapter 6.
To give a 'live' example of the implemented control platform, a constant-torque DC motor control is implemented using a PI controller.
"Future Improvements" lists some possible enhancements of this controller platform that could increase its usability and show its flexibility.
The appendix gives a short overview of Linux and UNIX, describes the software installation of the PC system and the network package, and closes with some hints on how to use and program the system.
1.2 Conventions
Code listings use a monospace Courier font to differentiate them from the regular (Roman) text.
Shell interactions are also set in Courier. The commands are prefaced with a '[root@labpc7#]' prompt and appear in bold.
CHAPTER 2 REQUIREMENTS

2 Requirements
The main goal of this project is to build a useful, flexible and reliable PC-based control platform suitable for (almost) all power electronic control tasks. Therefore, certain requirements have to be defined which the final control platform has to fulfill:
Hardware platform. A main issue is to use standard PC components, which ensures participation in further developments of the PC market (in almost every direction), i.e. speed or dual-processor capabilities.
Speed. The controller should be able to control an attached system at a rate of at least 20 kHz (8 channels input and output), with inputs sampled simultaneously and as time-accurately as possible. The sampled data must be computed and output as fast as possible, within the same cycle. The input-output 'delay time', or computation time, should be known in advance if possible, in order to include this delay factor in the calculations. A suitable operating system must be chosen to assure a proper foundation for the hardware-software interaction.
Software, platform. The software for the system should be flexible, modular and understandable, to allow the system to be easily adapted to new control environments.
Software, control. External access for manipulating and observing some (control) variables should be possible, also via network. A graphical control environment might be implemented in future developments.
Hardware, PC. As mentioned above, the PC hardware should be off-the-shelf to make further changes easy and possible without code changes. Increased computing speed especially should enable more complex control tasks.
Hardware, I/O. The I/O hardware should be easily adaptable to new environments as well. Inputs and outputs should be replaceable and 'upgradeable' as far as possible. This has to be considered when choosing the I/O hardware.
I/O hardware capabilities. The input and output hardware should have at least 8 channels: six for voltages and currents and two spare. A minimum of 12-bit resolution with an accuracy of at most ±0.5% should be provided. The input data should be sampled, and the output data output, simultaneously. To assure precisely timed samples, a conversion timer has to be onboard and an external trigger input should be available. The ADC hardware must be able to announce new data to the system.
This basic specification is the foundation for the following chapters. Based on it, the hardware and software have to be chosen.
CHAPTER 3 CHOOSING THE HARDWARE

3 Choosing the Hardware
3.1 PC System
One of the basic conditions for the project was to use off-the-shelf hardware with a focus on future availability (as far as possible). So there are only a few points the PC has to fulfill.
3.1.1 Bus: PCI or ISA based System?
If we focus on future availability, then choosing the bus system is one of the simpler tasks: there are only the PCI and the ISA bus. Since the ISA bus cannot be found on most of the latest motherboards for Intel-compatible CPUs, there is little choice: the PCI bus 'wins'.
The PCI bus would have been the first choice anyway. Table 3.1 compares the PCI bus to the ISA bus.
To assure the highest compatibility, all additional components (except the graphics card) will be PCI cards.
                     PCI                               ISA
Bus Specification    32 bit, 33 MHz                    16 bit, 8 MHz
Bus Speed            133 MB/s (burst) [9]              8 MB/s theoretical,
                                                       4-6 MB/s real [9]
Plug&Play            yes                               depends on the card; the OS
                                                       has to support ISA PnP
Availability of      also available on most            only on x86 architectures; new
Systems              non-x86 machines                  (non-industrial) products don't
                                                       support ISA anymore
Availability of      almost everything                 almost no new (industrial)
I/O Hardware         available for PCI                 products; old products to be
                                                       discontinued
Future 'speed-ups'   64 bit and 66 MHz available,      -
                     PCI-X with 133 MHz announced [44]
Annotations          -                                 in current systems the ISA bus is
                                                       'emulated' by a PCI to I/O bus
                                                       bridge connected to the PCI bus
                                                       (see illustration 5.2); in tests
                                                       with a 16-channel 14-bit A/D card
                                                       sampling at 25 kHz, the ISA bus
                                                       was considered too slow [16]
Table 3.1 Bus system comparison: PCI vs. ISA
3.1.2 CPU and System
One of the benefits of Linux and Intel-compatible hardware is the easy scalability and the huge number of available IBM-compatible hardware components. So basically any modern PC system should fit our needs. That is why a fast off-the-shelf computer will be used.
3.1.3 The Used Hardware
Originally a Pentium III system was chosen. It was equipped with an ASUS CUV4X-D dual-processor motherboard carrying one Intel Pentium III 1 GHz. The other components were right off the shelf: 128 MB SDRAM, a 20 GB IDE hard disk and a 3COM PCI NIC.
After encountering strange behavior in combination with the written control program (system crashes, but only when reading from the RT FIFOs), a different system was used. This system provided a stable platform, but was slower. Because no other test hardware was available, the source of the problem (memory, board incompatibility, only one processor in a dual board, etc.) could not be pinpointed.
This system is also an off-the-shelf PC with an Intel Pentium II 333 processor (the P2, or P2-333, machine). Appendix G includes a precise listing of the PC components used.
3.2 DAC and ADC Hardware
This PC now had to be equipped with appropriate data acquisition (analog-to-digital converter, ADC or A/D) and analog output (digital-to-analog converter, DAC or D/A) hardware. Special focus (besides the need to meet the specifications) is on ensuring that the hardware can easily be extended with additional inputs and outputs, and that the hardware requirements (PCI slots, IRQs etc.) are minimized to prevent possible configuration problems in different PC systems.
An interesting solution is the use of so-called IP Modules, also known as Mezzanine Modules or Industrial I/O Pack Modules, in combination with PCI carrier cards.
They are ANSI-standardized (ANSI/VITA 4-1995), versatile modules and provide a convenient method of implementing a wide range of I/O, control, interface, analog and digital functions. IP Modules, about the size of a business card, mount parallel to a host carrier board, which provides host processor or primary bus interfacing as well as the mechanical means for connecting the IP Module's I/O to the outside world. Typical carriers include stand-alone processors, DSP-based carriers, as well as desktop buses and VME-based boards. The ANSI specification includes mechanical, host bus electrical, and logical definitions of I/O space, memory space, identification space, interrupts, DMA, and reset functions. Two physical sizes, two fixed clock rates, and multiple data widths (sizes up to 32 bits) are defined.
Used with a PCI carrier card, the various required IP Modules can easily be plugged together. This system offers some benefits compared to the use of single PCI cards for D/A, A/D etc.:
The driver writer has to deal with only one physical PCI card. There will be fewer problems concerning interrupts (possibly shared ones) or timing issues than when accessing two or more different PCI cards at high frequency.
Writing the driver will be easier because the carrier board's access 'frame' will always be the same (see chapter 5.5 onwards). Adding cards or replacing cards (with better ones) will be much easier and less problematic.
In most cases all the data I/O hardware fits on one PCI carrier and therefore occupies only one PCI slot in the PC.
A basic disadvantage of these cards is that the carrier board is a kind of middleman between the modules and the PC system. This matters especially here, where the IP Modules use an extra 16-bit bus system clocked at 8 MHz. The time to transfer data might be higher than with conventional cards.
There are suitable A/D and D/A modules available from Acromag [21,22]. These modules have proven to be at least equivalent to other acceptable stand-alone products in terms of the demands of this project.
3.2.1 The Data Acquisition Hardware
Two possible alternatives to the IP Module are considered:
DATEL PCI-417 series [32]
Data Translation 3000 [33]
The IP Module they are compared with is the Acromag IP340. The technical specifications of these cards are summarized in table 3.2.
In the control of power electronic systems, current and flux space vectors are calculated through the simultaneous sampling of two or more channels. A/D boards with multiplexed inputs are therefore not considered.
                    IP Module:            PCI card:                PCI card:
                    Acromag IP340         Data Translation         DATEL PCI417D1
                                          DT3001
No. of simult.      8 (differential) or   16 (single) or 8         16 (single)
converted channels  16 (single)           (differential)
FIFO, size          yes, 512 samples      yes (circular),          yes, 4K samples
                                          3952 samples
Trigger A/D by      external, timer,      external, timer,         external, timer,
                    software              software                 software
Resolution          12 bit                12 bit                   14 bit
Possibility of      [yes] 2)              no                       yes (e.g. same
additional inputs                                                  card) 3)
Max. sampling       125 kS/s each         330 kS/s total           300 kS/s each
freq. 1)            channel                                        channel
Input range         +-10 V                +-1.25, 2.5, 5, 10 V     +-2.5 V
PCI data transfer   1 value per access    PCI burst possible       DMA, 2 values per
                                                                   32 bit
Miscellaneous       trigger output        2 analog outputs,        -
                                          8 digital I/O
Price (approx.)     US$ 1000              US$ 1095                 n/a
1) should be at least 40 kHz - not a selection criterion
2) the Acromag IP340 is an IP Module; additional inputs depend on the PCI carrier board used
3) the DATEL PCI417 series consists of a PCI board with up to two daughter boards; in this configuration one slot is free
Table 3.2 Comparison of A/D hardware

The Analog Output Hardware
Table 3.3 summarizes the technical specifications of a commercial D/A PCI card (Adlink PCI-6208V) and an IP Module from Acromag, the IP220.

3.2.2 Used ADC and DAC Hardware
All the examined hardware fits the project's demands, and the state-of-the-art PCI cards don't have significant advantages over the IP Modules. For that reason, and for the reasons mentioned above, the solution with a PCI carrier card and IP Modules is given preference: the Acromag IP340 and IP220 IP Modules will be used as the ADC and DAC hardware, respectively.
3.3 Hardware description
Having chosen suitable ADC and DAC hardware, a detailed description will now be given. The data and facts are mostly based on the provided manuals [21-23]; additional information can be found therein.
The PCI carrier board has to be introduced first:
                     IP Module:           PCI card:
                     Acromag IP220        Adlink PCI-6208V
No. of simultaneous  8                    8
output channels
Trigger via          software             software
Resolution           12 bit               16 bit (14 bit guaranteed [34])
Settling time 1)     8 µs                 2 µs
Output range         +-10 V               +-10 V
Miscellaneous        -                    4 digital I/O
Price (approx.)      US$ 500              370 (~340 US$)
1) should be at least 25 µs (40 kHz) - not a selection criterion
Table 3.3 Comparison of D/A Hardware

C
HAPTER
3 C
HOOSING THE
H
ARDWARE
3.3.1 PCI Slave Carrier Board, APC8620
This PCI card is a carrier for Industrial I/O Pack (IP) mezzanine board modules; this specific board has 5 IP module slots.
All advantages of the PCI bus configuration can be used, i.e. plug-and-play, memory space and interrupt configuration. Following the PCI standard, this card uses two memory regions: one for the Configuration Registers and one for the Carrier Board Memory Map. Through the latter, full access to the IP Modules is possible.
The field connectors to the outside world also reside on this board (see below, P2).
The card's PCI bus interface is used to program and monitor carrier board registers for configuration and control of the board's documented modes of operation. In addition, the PCI bus interface is also used to communicate with and control external devices that are connected to an IP module's field I/O signals.
The PCI bus interface is implemented in the logic of the board's PCI target interface chip. It implements the PCI specification version 2.1 as an interrupting slave (thus no active DMA transfer is possible), including 8- and 16-bit data transfers to the IP modules.
Besides the PCI logic, the IP interface logic is also implemented in the board's FPGA. The carrier board implements the ANSI/VITA 4-1995 Industrial I/O Pack logic interface specification and, in this model, includes five IP logic interfaces. The PCI bus address and data lines are linked to the address and data lines of the IP logic interface. The 8 MHz clock for the IP modules' clock circuitry is also provided.
Illustration 3.1 Acromag APC8620
Certain IP Modules can initiate interrupts. These are passed on to the PC if the carrier board has been configured with interrupts enabled.
A power failure monitor is included on the carrier board. It will reset the carrier card when the +5 V supply drops below 4.27 volts typical / 4.15 volts minimum. This circuitry is implemented as part of the Industrial I/O Pack specification.
To avoid damage, the +5 V, +12 V and -12 V supply lines to each IP module are individually protected with a 2 A and 1 A fuse, respectively. These lines also contain power supply filters to reduce noise from the PC's power supply. The filters are realized as T-type filter circuits comprising ferrite bead inductors and a feed-through capacitor.
The electrical connection to the IP modules is via 2 connectors, the IP Field I/O
Connector and the IP Logic Interface Connector.
IP Field I/O Connector P2
This connector provides the field I/O interface connections for mating IP modules to
the carrier board.
On the module side there is a 50-pin female receptacle header which mates with the
male connector on the carrier board. Incorrect assembly is prevented by keyed
connectors.
The pin assignments are unique to each IP model.
Differential inputs require two leads (+ and -) per channel and provide rejection of
common-mode voltages.
IP Logic Interface Connector P1
P1 of the IP module provides the logic interface to the mating connector on the
carrier board. This connector is a 50-pin female receptacle header which fits the
male counterpart on the carrier board. These connectors are also keyed. As these
modules are standardized, the pin assignments are the same for all Industrial I/O
Pack modules.
Logic lines not used by the modules are marked in the manuals.
3.3.2 Analog Input Module: Acromag IP340
The ADC module IP340 is a 12-bit simultaneous-sampling analog input module. It has
16 analog input channels, realized as two banks of eight channels each. Only one
bank can be converted at a time; the second bank of eight channels has to be
converted afterwards. For this purpose, a user-programmable delay counter is
implemented.
This board consists of eight individual 12-bit successive approximation ADCs with
integrated sample and hold.
Each of the 16 channels can be chosen independently. The conversion time is 8 µs,
which leads to a maximum conversion rate of 125 kHz.
The data collection can be done in
Single Cycle Conversion Mode - each conversion has to be initiated by writing
a conversion command to a register or by an external trigger signal - or in
Continuous Conversion Mode - a user-programmable timer starts the conversions
(this mode is currently used).
Calibration
Software calibration is possible. A reference voltage is provided so that software
can adjust and improve the accuracy of the analog input voltage over the
uncalibrated state. This voltage is measured at the factory and stored on the
module.
Software calibration will not be used here. More information is provided by the
manual.
IP340 calibrated total error: +/- 1.6 LSB or 0.039% of span.
Illustration 3.2 Acromag IP340
                               Max. Linearity   Max. Gain        Max. Bipolar     Max. Total
                               Error            Error            Zero Error       Error
                               +/- LSB (+/- %)  +/- LSB (+/- %)  +/- LSB (+/- %)  +/- LSB (+/- %)
Max. Uncalibrated Error        1 (0.0244)       2 (0.0488)       3 (0.0732)       6 (0.1464)
Max. Overall Calibrated Error  n/a              n/a              n/a              1.6 (0.039)
Table 3.4 Acromag IP340 calibration errors
Fault Protection
The input channels are protected against input overvoltage up to +/- 25V with power
on and +/- 40V with power off.
Timer
The IP340 has two built-in 24-bit timers. The first one starts conversion of the
first bank; the second timer starts conversion of the second bank (channels 8 to
16) at a given delay after the conversion of bank 1. The range of the conversion
timers is from about 8 µs up to 2 s cycles.
FIFO
The 16 channels share a generous 512-sample-deep FIFO buffer. Data tagging is
implemented for easy identification of the corresponding channel data. A FIFO
interrupt is possible when reaching a user-programmable threshold condition. It is
possible to check for FIFO empty, full or threshold reached. A read of the FIFO
needs at least one wait state.
3.3.3 Analog Output Module: Acromag IP220
The IP220 is a rather simple D/A board. It has 8 differential outputs with 12-bit
resolution and an 8 µs output settling time - a maximum throughput rate of 100 kHz
with 8 simultaneous channels.
There are two possible output modes:
Simultaneous Mode: the data is written to the addressed channel's input latches;
when software-triggered, all digital data is transferred to the output latches
simultaneously. (This mode is used.)
Transparent Mode: data written to a channel's input latch is automatically
converted and transferred to the board's field connector.
Calibration
The gain and offset error of each channel is measured at production time and stored
in an EEPROM on the module. These values can be read and used for software
calibration.
Table 3.5 shows the uncalibrated and the calibrated error according to the
manufacturer's statements.
                               Max. Linearity   Max. Offset      Max. Gain        Max. Total
                               Error (+/- %)    Error (+/- %)    Error (+/- %)    Error (+/- %)
Max. Uncalibrated Error        0.012            0.4              0.2              0.612
Max. Overall Calibrated Error  0.012            0.0061           0.0061           0.025
Table 3.5 Acromag IP220 calibration errors
Because the uncalibrated state is precise enough, software calibration will not be
implemented. The manual includes formulae for computing the calibrated values.

Illustration 3.3 Acromag IP220

4 Choosing the right Software
4.1 System Specific Demands
Unfortunately, hardware alone is not enough. Only the combination of hardware and
suitable software leads to a working system.
Especially in power electronics, where reliable systems are a basic requirement, the
demands on an operating system are partially different from those on a usual OS:
'guaranteed' short reaction times (about 15 µs)
predictable scheduling
stability
speed
easy porting of the written software to other hardware
multiprocessor capability
preferably a simple, straightforward and effective API
4.2 Standard Operating Systems (OSs) and Real-Time Control
Normal operating systems focus on usability and human interaction. Well-known OSs
are WINDOWS and UNIX based systems that come with a huge variety of applications.
To run a power electronic system with a complex control, very different demands
have to be fulfilled, as stated in the previous section. The needs of the control
system are dominated not by human interaction with its slow reaction times, but by
very fast and predictable real-time operation.
In particular, the desired short reaction times and predictable scheduling are not
provided by usual OSs. Linux, as an example, has a worst-case interrupt latency of
about 200 ms¹.
There are also many other problems with 'traditional' systems, such as:
memory swapping - slow access to I/O devices
malloc - providing new memory (possibly swapping necessary)
blocking in kernel mode (even processes with lower priority can block)
interrupts are managed by the system - the applications don't have direct access
the time granularity of the scheduler
Table 4.1 shows the different objectives of standard, full featured OSs and RTOSs.
1 186ms with 'disc load', AMD K6-350 [15]
Over 20 years ago, the first OSs were programmed especially to fit these special
needs. They were called Real Time Operating Systems - RTOS for short.
4.3 Real Time Operating Systems
4.3.1 Definition of a Real-Time System
The previous section raised the question of the usability of standard OSs in
real-time applications; to go further, it first has to be defined what "real-time"
means.
Different definitions of real-time systems exist. Here, three definitions are
given:
DIN44300: The real-time operating mode is the operating mode of a computer
system in which the programs for the processing of data arriving from the outside
are always ready, so that their results will be available within predetermined
periods of time. The arrival times of the data may be randomly distributed or may
already be determined depending on the different applications.
Koymans, Kuiper, Zijlstra [1]: A Real-Time System is an interactive system that
maintains an ongoing relationship with an asynchronous environment, i.e. an
environment that progresses irrespective of the RTS, in an uncooperative
manner.
Real-time (software) (IEEE 610.12-1990): Pertaining to a system or mode of
operation in which computation is performed during the actual time that an
external process occurs, in order that the computation results may be used to
control, monitor, or respond in a timely manner to the external process.
In real life, the term RTOS is used for all kinds of embedded systems. Section
4.3.3 gives an impression of the flexible usage of this expression.
RTOS                    full featured OS
optimize worst case     optimize average case
predictable schedule    efficient schedule
simple executive        wide range of services
minimize latency        maximize throughput
Table 4.1 General OS comparison

To build a real-time system, all of its components (hardware and software) should
enable these RT requirements to be fulfilled. Traffic on a bus should take place in
a way that allows all events to be managed within the described time limit. An RTOS
should have all the features necessary to be a good building block for an RT
system.
Note that real-time systems are not necessarily fast systems, but they are
predictable.
Almost all embedded systems are RT systems, which are specially constructed for
their use in a target environment. In an RT system, each individual deadline should
be met; the life-line (the earliest time at which a result can be delivered -
results must be delivered after it) and the target-line (the time at which the
designer aims to deliver the result, usually the time of maximum benefit) are also
called deadlines.
There are various types of (real-time) systems:
hard real-time: missing a deadline has catastrophic results for the system (a
good example is an airbag system) - power electronic systems typically fall into
this category;
firm real-time: missing a deadline entails an unacceptable quality reduction as
a consequence;
soft real-time: deadlines may be missed and can be recovered from, and the
reduction in system quality is acceptable; e.g. in a video conferencing system,
missing one picture only reduces the quality of the transmission but does not
damage the system;
non real-time: no deadlines have to be met.
Figure 4.1 plots the value function for these systems; e.g. in a soft RT system,
results still have some useful value even if they miss the deadline.
The real-time demands are various. Because of the different demands and application
fields, there are also tasks with more than one single discrete deadline - figure
4.2 [3] gives an example.

Illustration 4.1 Deadlines (value over delivery time for soft, firm and hard real
time)
Illustration 4.2 Example for more than one discrete deadline (value over delivery
time with life-line, target-line and soft, firm and hard deadlines)

4.3.2 Used Definition of 'Real Time'
To 'fix' the concept, the term 'real time' will be used in this text for a system
that is able to meet hard deadlines - i.e. a system that can guarantee certain
worst case times. This applies only to the start of a process: it is almost
impossible for the operating system to guarantee the end of a process without
substantial information about the algorithm.
Unfortunately, this does not depend on the OS alone. The hardware and the
programmer also play an important role in meeting these deadlines.
4.3.3 Market Overview
As mentioned above, only a real-time OS offers the required timing and scheduling
precision. To get a short impression of the RTOS market, the results of a poll by
Real-Time Magazine [13] are given in figure 4.3. Readers were asked about their use
or planned use of RTOSs.
This is not a representative market overview, and unfortunately it also does not
show the target platforms used. Approx. 95% of all produced microprocessors are
used in embedded systems.
Therefore the RTOS market is very fragmented. About 25% of those questioned don't
use an OS or use an OS with below 1% market share. Six percent use their own
proprietary OS.

Illustration 4.3 RTOS market overview: Other 27.17%, VxWorks 15.22%, Win NT 13.04%,
Win CE 8.70%, pSOS 7.61%, QNX 7.61%, proprietary RTOS 6.52%, RTX 5.43%, RT-Linux
3.26%, LynxOS 3.26%, INTIME 2.17%
Companies using RTOSs do not like to train their employees in different OSs, so
most of them consider the Microsoft Windows NT API a de facto industry standard
for RT applications. Microsoft Windows CE 3.0 is upcoming (the older versions
didn't have suitable real-time qualities) and many companies plan to use it - that
is the reason for its 8% market share.
As mentioned, this diagram has to be read with care, because no distinction between
hard and soft RT or between the types of application has been made. E.g. Windows NT
itself offers no real-time capabilities [10], although the Windows NT 4 embedded
version is often used in set-top boxes and other applications.
4.4 Characteristics of Suitable Real Time Operating Systems
To limit the number of RTOSs investigated, only those fulfilling the following
requirements are considered:
stable
wide-ranging support for PC hardware
good documentation
efficient API
easy hardware access
host = target environment (i.e. developing and testing on the same machine)
graphical environment for future enhancements
characteristics of hard RT OSs and technical demands should be fulfilled
For this project, the 4 most interesting OSs for Intel x86 architectures have been
picked to show their qualities and compare them with our requirements: VxWorks,
QNX, LynxOS and RT-Linux.
Although the Microsoft products were very popular in the overview, Windows CE
offers no hard real-time capabilities and Windows NT needs special real-time
extensions. Also, because of possible problems when dealing with the Windows NT
API, these products were not considered.
4.4.1 The Ordeal of Options
Let it be stated in advance that RTLinux has been chosen out of the 4 possible
solutions.
The problem with finding the best operating system is that all the manufacturers
offer very good-looking characterizations of their products, but usable and,
especially, independent comparisons of all these (partially very expensive) OSs are
very rare. All systems appear to meet the specifications.
In terms of speed, benchmarks are unreliable because the OSs differ very much in
kernel structure and programming model. Also, these OSs' main application field is
not the i386 PC architecture. A direct comparison of benchmarks from an embedded
PowerPC platform and an Intel PIII PC system is almost impossible ([15] and [37]
give an impression).
RTLinux has been chosen because it is one of those popular open source systems
that seem to have qualities for future usage. Also, the combination of the very
flexible Linux and a real-time component is very interesting, and the RTLinux
community is very active. Especially on the mailing list [47], the two creators of
RTLinux, Victor Yodaiken and Michael Barabanov, often participate in discussions
and try to help.
The following subsections give an impression of the possibilities of the systems.
The RTLinux part is a bit longer and is meant as an introduction to the system.
Programming-related facts will be shown in the next chapter.
4.4.2 VxWorks
VxWorks was initially a development and network environment for VRTX and pSOS
systems. Only later did Wind River Systems develop their own microkernel.
At the heart of the VxWorks run-time system is the wind microkernel. This
microkernel supports a full range of real-time features including multitasking,
scheduling, intertask synchronization/communication and memory management. All
other functionality is implemented as processes.
VxWorks is very scalable. By including or excluding various modules, it can be
configured for use in small embedded systems with tough memory constraints as well
as in complex systems where more functions are needed. Furthermore, individual
modules themselves are scalable: individual functions may be removed from a
library, or specific kernel synchronization objects may be omitted if they are not
required by the application.
4.4.2.1 Tasks
The VxWorks real-time kernel provides a basic multitasking environment. VxWorks
offers both POSIX and a proprietary scheduling mechanism (called wind
scheduling). Both a preemptive priority and a round robin scheduling mechanism
are available.
4.4.2.2 Memory
In VxWorks, all system and all application tasks share the same address space.
This means that faulty applications could accidentally access system resources and
compromise the stability of the entire system. However, WindRiver Systems does
provide an additional component (VxVMI) that needs to be purchased separately
and that allows every process to have its own private virtual memory.
VxWorks also does not offer privilege protection, the privilege level is always 0
(supervisor mode).
Illustration 4.4 VxWorks kernel structure [35] (wind kernel with BSP and drivers as
hardware-dependent software; I/O system, TCP/IP, file system and VxWorks libraries
as hardware-independent software; tool applications on top)
4.4.2.3 Interrupts
To achieve the fastest possible response to external interrupts, interrupt service
routines in VxWorks run in a special context outside of any thread's context, so that
there are no thread context switches involved.
4.4.3 QNX
QNX sells two different versions: QNX4 and QNX6 / RTP (Neutrino). QNX4 is partly
compatible with the POSIX4 standard and doesn't use threads. In their basic
concepts the two systems are relatively similar; QNX6 handles some things in a
more modern, evolved way.
QNX is constructed as a host = target environment and has a client-server
architecture consisting of a lean microkernel that implements only core services and
optional cooperating processes. The microkernel itself is never scheduled. Its code is
executed only as the result of a kernel call, the occurrence of a hardware interrupt
or a processor exception.
QNX is a message based operating system. Message passing is the fundamental
means of QNX's IPC. The message passing service is based on the client-server
model: the client (e.g. an application process) sends a message to a server (e.g.
device manager), which replies with the result.
A client-server architecture has many advantages, of which robustness is one. The
price paid for this is performance: execution of system calls requires a few
context switches (with an overhead produced by memory protection), resulting in
somewhat lower performance. Due to its architecture and the deep integration of
message passing and network messaging, QNX qualifies as a fully distributed
operating system.
4.4.3.1 Network
Distributed operating system means that QNX integrates the entire network into a
single, homogeneous set of resources.
Any process on any machine in the network can directly make use of any resource
on any other machine. From the application's perspective, there is no difference
between a local or remote resource - no special facilities need to be built into
applications to make use of remote resources.
Users may access files anywhere on the network, take advantage of any peripheral
device, and run applications on any machine on the network (if they have the
appropriate authorization). Processes can communicate in the same manner
anywhere throughout the entire network.
Illustration 4.5 shows the QNX microkernel's responsibilities:
IPC - the microkernel supervises the routing of messages; it also manages two
other forms of IPC: proxies and signals
low-level network communication - the microkernel delivers all messages
destined for processes on other nodes
process scheduling - the microkernel's scheduler decides which process will
execute next
first-level interrupt handling - all hardware interrupts and faults are first
routed through the microkernel, then passed on to the appropriate driver or
system manager
4.4.3.2 Tasks
QNX is a multi-process system. It does have threads, but they are implemented in
an unconventional way that is substantially different from POSIX threads.
A QNX thread behaves more like a child process spawned by a parent process than
like an actual thread. When a QNX thread is created by a process, the thread will
use the same code and data segment as the process, as is the case with conventional
threads. However, certain objects like timers and file handles created by the parent
process cannot be accessed by the thread.

Illustration 4.5 QNX kernel structure [30] (microkernel with IPC, scheduler,
network interface and interrupt redirector; processes and the network manager
communicate through it)
4.4.3.3 Memory
In QNX, every process has its own virtual memory, code and data segment, and
consequently its own Local Descriptor Table (LDT). This virtual memory is provided
by the paging mechanism in the Intel processor (the processor is used in protected
mode).
Every process has its own code and data segment, so when a process is deleted,
these segments will be deleted too. Hence, it is of the utmost importance that fixed
size segments are used. If variable size segments were used, memory space would
become fragmented due to the constant creation and deletion of variable size
memory blocks.
4.4.3.4 Interrupts
Interrupts are not disabled during the execution of an interrupt handler, so the
processor is always able to receive interrupt signals from the PIC. However,
interrupts with the same or lower priority remain pending in the PIC for however
long interrupt handler execution takes. An interrupt handler can only be preempted
by interrupt handlers from a higher-priority interrupt source.
Interrupt-to-task communication is limited: only objects called proxies can be
used. In situations where a thread has to be scheduled from an interrupt handler,
these proxies are less powerful than semaphores.
4.4.4 LynxOS
LynxOS is a UNIX-compatible, POSIX-conforming, multi-process and multithreaded
operating system designed for complex real-time applications that require fast,
deterministic response. The LynxOS kernel was specifically designed for hard
real-time applications.
The modularity inherent in the LynxOS architecture makes the operating system
highly scalable and configurable. At its smallest, LynxOS can be configured with
only the kernel and linked with an application to form a ROMable image for
specialized embedded applications. At its fullest, LynxOS is a self-hosted
development environment consisting of a wide array of software development tools,
UNIX-compatible utilities, industry standard networking, a graphical user
interface, and a UNIX-like hierarchical file system.
LynxOS conforms to the POSIX 1003.1 system call interface standard and has been
implemented according to the POSIX 1003.1b real-time extensions and the 1003.1c
threads extensions. It also includes the 4.4 BSD system call interfaces and
libraries, which provide a high degree of source-level compatibility for
applications written in either flavor of UNIX - including Linux.
4.4.4.1 Tasks
LynxOS is an RTOS with a strong focus on thread programming. There even exist
queueable kernel threads at the kernel level; to establish this, LynxWorks has
extended gcc.
These threads are used in a slightly different way than usual. Without a closer
look, it can be said that priority inversion can be avoided: multiple programs can
enter the kernel via system calls, where kernel threads take over not only the
parameters but also the priority of the calling program.
4.4.4.2 Interrupts
In a normal LynxOS interrupt service design, actual event service processing does
not occur in the ISR; instead, the kernel dispatches a LynxOS kernel thread in
response to the interrupt, which can be prioritized and scheduled on par with any
other thread in the system.
Kernel threads allow interrupt routines to be very short and fast. An important key
feature of LynxOS, kernel threads ensure predictable response even in the presence
of heavy I/O.
4.4.4.3 Memory
LynxOS supports the MMU of the processor and offers memory protection for all
running threads.
Demand paging is also available, but because this paging system does not increase
predictability, it can be disabled in the kernel.
4.4.5 RT-Linux 3.0
The real-time variant of Linux, called RT-Linux, was originally developed at the
Department of Computer Science of the New Mexico Institute of Technology as part
of a master thesis by Michael Barabanov and his supervisor Victor Yodaiken.
The developers' idea was to leave Linux as untouched as possible - the target was
not to destabilize the system and to make integrating upcoming Linux developments
easy [25]. It works by inserting a small real-time executive between the hardware
and the classic Linux kernel. In fact, the non-real-time Linux kernel becomes the
idle task of the new real-time kernel, using a virtual machine layer to make the
non-real-time kernel fully preemptible.
In 1997 Victor Yodaiken founded FSMLabs [45], which has developed RTLinux from
version 2 on.
Version history
Version 1 of RT-Linux - a research project [12] - was based on the 2.0 Linux
kernel. The latest release in this series was version 1.3, which was based on the
2.0.37 kernel. Version 2 - the first production version [12] - of RT-Linux is
based on the 2.2 Linux kernel. At the beginning of 2001, version 3.0 - the first
industrial strength version [12] - was released. It currently supports Linux
2.2.18 and 2.4 kernel versions. Today RTLinux 3.0 is available for many different
processor platforms, and not only for Linux but also for NetBSD [46]. The final
RTLinux 3.1 version will be available very soon (see [46]).
4.4.5.1 Structure of RTLinux
In RTLinux, realtime-capable threads are managed by a modularized mini-kernel.
This kernel is independent of Linux. Realtime tasks are loaded like device drivers
(kernel modules) into the running system, so they have direct access to the
hardware. The Linux kernel itself is also loaded as a module - but with the lowest
priority (idle thread).
The benefit of this architecture is time-critical (i.e. direct) access to the
hardware, but there is only restricted communication with the Linux kernel and
restricted use of the Linux system calls. Linux is intended to deal with all the
non-real-time tasks such as basic I/O, including graphics and disk access,
communication etc.
Figure 4.6 shows the position of the Linux kernel in the RTLinux system and the
communication possibilities. The RTLinux kernel is framed.
Different scheduling methods are available [42]. Because our project will only use
a very primitive FIFO scheduling mechanism, there will be no focus on different
scheduling methods; [36] and [41] give a good explanation of scheduling.
Details
Pages: 178
Edition: original edition
Year of publication: 2005
ISBN (eBook): 9783832466329
ISBN (paperback): 9783838666327
DOI: 10.3239/9783832466329
File size: 1.7 MB
Language: English
Institution / University: Friedrich-Alexander-Universität Erlangen-Nürnberg
Publication date: 2003 (April)
Grade: 1,0
Keywords: echtzeitbetriebssystem, linux, inbetriebsetzung, neutrino