


Development of a High-Speed Multi-Channel Analog Data Acquisitioning Architecture

Linda M. Björk, Steven C. Persyn, Mark A. Johnson, Kelly D. Smith, Buddy J. Walls, Michael E. Epperly

Department of Space Systems

Southwest Research Institute

6220 Culebra Road

San Antonio, TX 78238

210-522-5395

{lbjork, spersyn, majohnson, ksmith, bwalls, mepperly}@swri.edu

Abstract- As measurement techniques in the space science community rapidly evolve, the demand for multi-channel, high-speed, radiation-tolerant data acquisition systems grows ever higher. The high volume and resolution of the data, together with the complexity of the in-situ processing and analysis requirements, have triggered the need for faster, smaller, and easily reconfigurable designs.

In response to this demand, Southwest Research Institute has developed a high-speed, multi-channel, versatile data acquisition architecture. This new architecture can perform reconfigurable DSP algorithms and subsequent data processing on instrument analog input signals. The overall architecture and topology were developed as part of a SwRI science instrumentation trade study and then implemented on the NASA Gamma-ray Large Area Space Telescope (GLAST) Data Processing Unit (DPU). It can easily be reconfigured for other space mission and instrument applications.

This paper presents an architectural view of the design and gives examples of its tremendous versatility. It also addresses the evolution of analog DSP data acquisition systems in general and emphasizes the advantages and tradeoffs between today’s approaches versus older heritage methods.

Table of Contents

1. Introduction

2. Architectural Overview

3. Example Implementations

4. Heritage Techniques vs. High-Speed Digitization

5. Conclusion

1. Introduction

The need for high-speed, multi-channel analog data acquisition systems capable of performing DSP functions and employing analog shaping is widespread for spacecraft and instrument computers. Instrument applications for this type of system range from high-energy or other analog detector pulses, to video inputs, to sound information. In general, this architecture is applicable to any instrument whose output is an analog waveform and whose science is related to a characteristic of that waveform, such as pulse height, rise time, or pulse area. Historically, these types of systems were designed with multiplexers to satisfy the need for multiple channels, and used slow analog-to-digital converters because faster parts were unavailable for space. As a result, these older systems were forced to perform all the required processing in the initial analog stage and later in a DSP CPU calculation, reducing throughput and reconfigurability and adding overhead.

Southwest Research Institute’s answer to this problem is a new design architecture that can easily be reconfigured for other analog data acquisition DSP applications. The architecture performs low-noise, high-speed analog input processing with the ability to implement pulse shaping, real-time DSP of the digitized form of the analog input waveform, data compression, data formatting, and data throttling of the information to a multitude of data transmission interfaces, such as MIL-STD-1553B, SpaceWire, LVDS, and RS-422, to name a few. One factor that has played a major role in this new development is the availability of a high-speed, radiation-tolerant ADC. The ADC chosen for this architecture is a flash-based, pipelined ADC with an integrated latch-up protection ASIC. It supports speeds up to 10 MSPS, with single-ended and differential input capabilities, noise levels down to 20 mV, and 14-bit resolution.

2. Architectural Overview

DETECTOR INTERFACE

The theory behind the DSP architecture developed by SwRI is very straightforward. The basic architectural flow is shown in Figure 1. The system accepts either single-ended or differential analog inputs, depending on the application. If redundancy is required, cross-strapping can be implemented at this stage to feed the signals to redundant circuit boards. The incoming pulses then pass through configurable signal conditioning circuitry, which performs functions such as pulse shaping, amplification, attenuation, and lowpass or bandpass filtering, in order to remove unwanted signals outside the bandwidth of interest and prevent aliasing. The conditioned analog signals are then distributed to a set of discrete 14-bit flash-based, pipelined ADCs for sampling. If the ADC conversion rate (up to 10 MSPS) is high enough for the application, each individual analog input can be equipped with a dedicated ADC for real-time conversion. If not, conditioned signals from several channels can be multiplexed together before being converted into digital (discrete) values. For input voltage protection, clipping diodes can be implemented at the signal conditioning input and at the ADC input to prevent the pulses from reaching harmful voltage levels.

The beauty of a reconfigurable design is its reusability. The detectors may be completely different from mission to mission, as may the voltage levels, gains, or filter frequencies, but the basic design stays practically the same. As for the front-end electronics, replacing resistors is commonly all that is needed to change the functionality.

[pic]

Figure 1. Simple block diagram of architecture.

Digital Signal Processing FPGA

The streaming digital values from the ADCs are supplied to the FPGA, where real-time DSP algorithms are performed. Various algorithms can be implemented, such as pulse height detection, filtering, discrete Fourier transform calculations, area calculation (integration), ramp detection/calculation, and time delay measurements, to name a few. In order to maintain these real-time operations, the DSP must perform all its required computations within the ADC sampling interval, 1/fs.

In the past, these types of algorithms were developed exclusively in schematic capture, which was quite time consuming. Since the development trend shifted toward hardware description languages such as Verilog and VHDL, time and ultimately money have been saved, and the design possibilities have been raised to a higher level.

The most typical algorithm is the pulse height detection (PHD) algorithm. It is often used in applications with radiation detectors, such as sodium iodide (NaI) or bismuth germanate (BGO) scintillators. The PHD algorithm, like most science algorithms, is usually designed to use a specific threshold value as a trigger for a state machine to start capturing pulses. In this case, for each sample a value is captured and held. This held value is then compared to the next captured value. If the new value is higher, it is held instead and the old value is discarded, followed by another value being sampled. This process continues until a new value is less than the held value, which is then considered the peak value. This is the simplest form of the algorithm. Often other variables are implemented, such as threshold count filters and some form of peak wait count. Threshold count filtering provides the capability to screen out false peaks, filter periodic noise components, and correct for anomalies in the system. As an example, when the FPGA detects a count above the given ADC threshold, it verifies the pulse value stays above that threshold for “X” data points before the real peak height search begins, effectively filtering out false pulses or noise. The peak wait count functions in a similar way. When a peak value has been detected, the state machine collects “X” more data points to verify they are below the peak value. This confirms the peak value is real and not the result of noise components. After the peak value is confirmed, the peak value (peak height) is ready to be processed as an event.
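The state-machine behavior described above can be sketched in software. The following is a minimal illustration, not flight firmware; the function name and the threshold, threshold-count, and peak-wait parameters are placeholders:

```python
def detect_peaks(samples, threshold, thresh_count=3, peak_wait=3):
    """Return confirmed peak heights found in a stream of ADC samples."""
    peaks = []
    i = 0
    n = len(samples)
    while i < n:
        if samples[i] > threshold:
            # Threshold count filter: require thresh_count consecutive
            # samples above threshold before starting the peak search.
            run = 0
            while i < n and samples[i] > threshold and run < thresh_count:
                run += 1
                i += 1
            if run < thresh_count:
                continue            # false pulse or noise spike; rearm
            # Hold the running maximum while samples keep rising.
            peak = samples[i - 1]
            while i < n and samples[i] >= peak:
                peak = samples[i]
                i += 1
            # Peak wait count: confirm the next peak_wait samples stay
            # below the held value before declaring an event.
            confirm = samples[i:i + peak_wait]
            if len(confirm) == peak_wait and all(s < peak for s in confirm):
                peaks.append(peak)  # event ready for distribution
            i += peak_wait
        else:
            i += 1
    return peaks
```

A noise spike that crosses threshold for fewer than `thresh_count` samples is rejected, matching the filtering behavior described above.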

Figure 2 shows this pulse height detection process in detail. The figure gives a summary of the state machine states and the filtering and event detection steps.

[pic]

Figure 2. Peak Height Detection Algorithm

The discrete Fourier transform (DFT) transforms a stream of digital values from the time domain into the frequency domain, providing frequency plots of incoming signals. In a DSP it is usually implemented as the fast Fourier transform (FFT). A typical application is the measurement of oscillating quantities, i.e., wave analysis.

There are two ways to calculate the FFT. Either the calculation is made once every “X” data points, or continuously in a dynamic, streaming mode: “X” data points are collected and a calculation is made; then one new data point is shifted in and a new calculation is made, and so on.
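As a rough illustration of the two modes, the sketch below computes a direct DFT per window; a flight implementation would use an FFT core in the FPGA, and all names here are illustrative:

```python
import cmath

def dft(x):
    """Direct discrete Fourier transform of one window of samples."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def block_dfts(samples, n):
    """Block mode: one transform per n data points."""
    return [dft(samples[i:i + n]) for i in range(0, len(samples) - n + 1, n)]

def sliding_dfts(samples, n):
    """Streaming mode: after the first n points, recompute each time one
    new data point is shifted into the length-n window."""
    return [dft(samples[i:i + n]) for i in range(len(samples) - n + 1)]
```

For the same input, the streaming mode produces one spectrum per new sample, whereas the block mode produces one per window.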

Area calculation (integration) determines the area under the analog pulse above a certain voltage (ADC value) threshold. A common application is high-energy particle detection with solid-state detectors. As a particle travels through the sensor it continuously loses energy, and the resulting pulse carries information about that energy. The area of the captured pulse corresponds to the energy of the particle.

Once the signal rises above a threshold, the DSP FPGA integrates all ADC values until the signal falls back below it. The integration error depends on the sampling rate, per the standard sampling theorem, and on the integration technique; in practice, these errors are usually insignificant.
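A minimal sketch of this threshold-gated integration, using simple rectangle-rule accumulation (the actual integration technique in the FPGA may differ):

```python
def integrate_pulse(samples, threshold, dt):
    """Integrate ADC values while the signal is above threshold.
    dt is the sampling interval 1/fs; returns the pulse areas found."""
    areas = []
    area = None
    for s in samples:
        if s > threshold:
            # Accumulate sample * dt while the pulse is above threshold.
            area = (area or 0.0) + s * dt
        elif area is not None:
            areas.append(area)   # signal fell back below threshold: event
            area = None
    if area is not None:
        areas.append(area)       # pulse still open at end of stream
    return areas
```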

Another typical science algorithm, the ramp rate calculation, is particularly useful. One common application of ramp rate is Langmuir probes, where the detector electronics monitor the current versus voltage while ramping. The correlation between the two yields vital information about the temperature of the surrounding plasma, the potential of the spacecraft, and the type of particle dominating the environment around the probe.

Time delay measurements are very common in mass spectrometry, where the ions are deflected and the time delay between when an ion enters the detector and when it exits corresponds to the mass of the particle. The algorithm simply calculates the time delay between two peaks, shifts out the oldest value, grabs a new one, performs a new calculation, and so on [1].
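The delay calculation itself reduces to differencing consecutive peak times. A minimal sketch, assuming peak sample indices are already available from a peak detector:

```python
def time_delays(peak_times, dt):
    """Delays between consecutive detected peaks.
    peak_times are sample indices of confirmed peaks; dt = 1/fs.
    As each new peak arrives, the oldest is shifted out and the delay
    to the previous peak is computed."""
    return [(t1 - t0) * dt for t0, t1 in zip(peak_times, peak_times[1:])]
```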

For all the applications mentioned above, when data (or an event) is ready in the DSP FPGA, an event controller is signaled for distribution of these events. The data is sent out and the process starts over again. In parallel with this a deadtime delay counter can be implemented at the end of the pulse, which will put the state-machine in sleep mode for a given period of time. Typically this deadtime counter is started the moment an event is signaled to be ready, as shown in Figure 2.

Typical errors surface in all of these implementations when the sampling clock is out of phase with the analog pulses. Fortunately, pulse height errors are typically small and insignificant as long as the pulse width spans at least ten sampling periods (at 10 MSPS, this guarantees at least 10 samples per pulse).

Depending on the number of detector channels implemented in the system, multiple DSP FPGAs may be needed to handle all the data processing. As a rule of thumb, a DSP FPGA can generally handle four to seven detector channels, depending on the bit processing required.

Event Controller

When data (an event) is ready in the DSP FPGA, the DSP FPGA handshakes the event with an event controller (EC). In short, the EC is responsible for directing the data (event) to its proper destination and handling the timing of the transfer. The data can be handled in many ways. Once the EC has new event data from the DSP FPGA, it can drive the data through compression hardware, send the data (event) directly to transmission interfaces, or supply the data to the CPU directly for processing (see Figure 1).

Compression methods available include hardware only, software only, or a mix of both. The most common hardware compression method is the lookup table (LUT). LUTs can be used in two ways: for compression through binning, or for error detection encoding such as Reed-Solomon encoding. In a binning architecture, the data is throttled to data-type-specific lookup tables, such as SRAMs, which compress, for example, 12-bit values into 8-bit values by using the data (event) value as the SRAM address and the compressed value as the SRAM read data. The lookup tables drive the compressed values to data-type-specific counting memories, also controlled by the EC. Lookup-table binning is a form of lossy compression and is often used for sound, video, graphics, and picture applications because of the unavoidable loss of information due to the truncation of the data. For these types of applications it is usually acceptable to lose some information, because the loss is transparent to the source. Other examples of lossy compression are the JPEG and MPEG data formats, which are handled by the CPU. Another common method is lossless compression, also commonly referred to as redundancy reduction. It is mostly applied to critical data that cannot afford to lose any information. The algorithm detects and encodes patterns in the data, which makes it a less desirable option for compressing random data. An example of lossless compression is Rice compression.
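A binning LUT of the kind described can be sketched as follows; the square-root binning curve is purely illustrative, since the real table contents are data-type specific and loaded into SRAM:

```python
def build_sqrt_lut(in_bits=12, out_bits=8):
    """Build a lookup table compressing in_bits-wide ADC values to
    out_bits-wide codes. Square-root binning shown as an example curve."""
    in_max = (1 << in_bits) - 1
    out_max = (1 << out_bits) - 1
    return [round((v / in_max) ** 0.5 * out_max) for v in range(in_max + 1)]

lut = build_sqrt_lut()
# The event value serves as the SRAM address; the read data is the
# compressed code, which is truncation (lossy compression).
compressed = lut[3000]
```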

Whether the data is compressed or not, the resulting data is typically stored as raw events and/or the event values are counted (accumulated) over a specific period of time. A multitude of event data types can be implemented, including time-tagged events, maximum value detection, minimum value detection, and total counts of an event within a specific time frame. For counted or accumulated event data types, the counting memory for each data type can be equipped with a timer. When it expires, the data is ready to be processed or transmitted. An interrupt is sent to the CPU, where the data (event) is read and processed before being sent out to specific data interfaces or driven directly to FIFOs controlled by a transmission data formatter. To perform this kind of data shuffling in counting memories, ping-pong buffers are needed, giving the EC access to one buffer while the CPU or FPGA formatter/FIFO controller accesses the other buffer, which contains expired data. When an expiration timer expires, it sends a toggle signal to the respective data type’s buffers. Each time that occurs, the storing memory bank for that data type switches. An interrupt is transmitted to the processor, which reads and clears the previous counting bank. Meanwhile, the other bank continues counting new events.
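The ping-pong counting scheme can be sketched as a software model of the hardware behavior; the class and method names are illustrative:

```python
class PingPongCounters:
    """Two counting banks: the EC accumulates events into the active bank
    while the CPU reads and clears the other. On timer expiration, toggle."""

    def __init__(self, n_bins):
        self.banks = [[0] * n_bins, [0] * n_bins]
        self.active = 0                      # bank the EC is counting into

    def count_event(self, bin_index):
        self.banks[self.active][bin_index] += 1

    def on_timer_expired(self):
        """Toggle banks; return the expired bank's counts for the CPU."""
        expired = self.banks[self.active]
        self.active ^= 1                     # EC continues in the other bank
        snapshot = list(expired)             # CPU reads...
        expired[:] = [0] * len(expired)      # ...and clears the old bank
        return snapshot
```

New events arriving after the toggle land in the fresh bank, so counting never stops while the CPU reads out.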

The inclusion of the DSP processor significantly expands the overall system capabilities. In addition to accessing data from ping-pong buffers, the CPU accesses data directly from a stream of raw data (Figure 1). Regardless of its origin, the CPU can compress the data in software and can later process the data through various types of algorithms, such as location algorithms, centroid algorithms and science algorithms. When the processing is done, the processed data can be transmitted out via several types of interfaces, including MIL-STD-1553B or SpaceWire.

The EC also controls the mission elapsed timer or vehicle time code, which is synchronized with the spacecraft clock and provides time stamps for CCSDS packets and time tagging of data. Every time the DSP handshakes with the EC, a time is latched. The latched time always contains known, fixed errors in the form of delays. These result mainly from two sources: the pipelined ADC (Figure 7), which delays the timing by three clock cycles, and the peak wait count, which contributes an additional “X” clock cycles of delay (see Figure 2).

Data Interfaces and Formatting

The processed information from the CPU or the EC can be handled in several ways. It can be temporarily stored in solid-state recorders (SSRs), sent out via the MIL-STD-1553B bus, the SpaceWire bus, or a serial stream, or formatted into CCSDS packets, to name a few.

Solid-state recorders (SSRs) provide onboard mass data storage for sophisticated space-based science instruments. These systems are most often stand-alone electronics housed in their own enclosure, but can in some cases be consolidated with other circuit boards. The SSR can be used in different ways. It receives data directly from the EC and/or the CPU. The information can be stored in a raw format or as source packets, depending on the needs of each particular application. Stored data can be played back and sent to the spacecraft computer for further processing or transmission to the ground as needed [2].

The MIL-STD-1553 bus network is typically used for transmitting telemetry, such as science data and housekeeping information, to the ground via the spacecraft computer, and for receiving telecommands from the spacecraft computer. It typically receives time-marked messages, ancillary data, program memory uploads, and data table uploads from the spacecraft computer. MIL-STD-1553 is based on one bus controller and usually several remote terminals. In most cases the spacecraft computer serves as the system’s bus controller, directing the flow of the data. The data can be structured in a few different ways: it can be transmitted in a fixed format or structured as frames dictated by the Consultative Committee for Space Data Systems (CCSDS).

The SpaceWire bus is a serial data protocol based on IEEE 1355. Essentially, it is a network, like MIL-STD-1553, targeted for space applications. It is a high-performance data handling infrastructure, interfacing all levels of spacecraft electronics. It is composed of nodes and routers interconnected through bi-directional, high-speed digital serial links via LVDS/MLVDS interfaces. SpaceWire is now the standard of choice for all new ESA science instruments and is baselined into several NASA and ESA missions [3].

As for serial transmission, there are many bus options. When communicating at high data rates, or over long distances, single-ended standards such as RS-232 are often inadequate. Differential data transmission offers the performance and noise tolerance needed in most space applications. Examples of differential buses are RS-485, RS-422, MLVDS, and LVDS [4]. The main advantage of differential signaling is the effective cancellation of noise and other sources of interference. RS-485 and MLVDS are used for multipoint communication, while RS-422 and LVDS are used for point-to-point communication. RS-485 and RS-422 support transmission rates of about 10 Mbit/s, while LVDS and MLVDS are capable of over 100 Mbit/s, which makes LVDS/MLVDS the more widely used buses for high-speed applications [5].

The CCSDS has, among many other things, developed a packetization standard for telemetry. It is based on layered encapsulation of data within headers and footers. The packetization occurs in a formatter engine which, in its simplest form, pulls data from data-type-specific FIFOs and formats it into source packets. These can be transmitted as raw source packets or transformed into frames. The source packets or frames can, for example, be stored in solid-state recorders or sent out via RS-422 or LVDS. The CCSDS source packets typically follow the source packet format described in Packet Telemetry Blue Book standards such as 102.0-B-4. Figure 3 shows how the protocol for formatting telemetry is divided into layers. Each layer adds protocol to the data from the layer preceding it.

[pic]

Figure 3. CCSDS Telemetry Model

The data is formatted in a user-defined protocol in the application process layer and the system management layer. Science data is nominally formatted into CCSDS source packets. A source packet begins with a header and sometimes ends with error detection and correction (EDAC) information. The data within the packet is limited to 65536 octets. A segmentation flag and a source sequence counter ensure larger data sets are reconstructable once they reach their destinations. In the segmentation layer, long source packets are divided into smaller packets and labeled with an application identifier. In the transfer layer, the source packets are turned into a transmittable form by the addition of data synchronization and error detection information. The coding layer provides services that enhance the physical layer, in the form of coding such as Reed-Solomon or CRC encoding. In the physical layer, the frames are sent out via the physical interface to their destination, where they are decoded [6].
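As an illustration of the packetization described above, the sketch below builds the 6-octet CCSDS source packet primary header (version, type, secondary-header flag, APID, sequence flags, sequence count, and a packet data length field stored as one less than the number of data octets). The field values used are examples only:

```python
import struct

def ccsds_primary_header(apid, seq_count, data_length, seq_flags=0b11):
    """Build the 6-octet CCSDS source packet primary header.
    data_length is the number of octets in the packet data field
    (at most 65536); the header stores data_length - 1.
    Layout: version(3) | type(1) | sec-hdr flag(1) | APID(11),
            sequence flags(2) | source sequence count(14),
            packet data length(16)."""
    word1 = (0 << 13) | (0 << 12) | (0 << 11) | (apid & 0x7FF)
    word2 = ((seq_flags & 0b11) << 14) | (seq_count & 0x3FFF)
    word3 = data_length - 1
    return struct.pack(">HHH", word1, word2, word3)  # big-endian 16-bit words
```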

3. Example Implementations

The advantage of having a reconfigurable architecture is reusability. Systems based on this architecture can be very complex, with multiple detectors existing within a system. The GLAST Burst Monitor (GBM), shown in Figure 4, is an example of a high-performance, fully redundant system based on this architecture, developed for the NASA GLAST mission. The block diagram in Figure 5 presents an overview of the complex architecture for this implementation.

The architecture also lends itself to space optimized implementations, such as the Miniaturized Optimized Processor for Space (MOPS), shown in Figure 6. The MOPS is an example of a miniaturized implementation of the SwRI DSP architecture. It clearly demonstrates the versatility this architecture can deliver in size as well as implementation. The GBM and MOPS systems look completely different and have completely different applications. However, they are both based on the same architecture.

A typical small system could consist of a scaled down version of the full system’s sub-sections, or only selected parts of it. For example, it may have fewer channels and/or ADCs. All processing may be done in the DSP FPGA and it may or may not be redundant or utilize compression. The data storage and interfaces may differ drastically.

[pic]

Figure 4. Box view of Data Processing Unit for GLAST.

[pic]

Figure 5. Example of a full system.

[pic]

Figure 6. View of unfolded MOPS electronics and housing.

4. Heritage Techniques vs. High-Speed Digitization

Historically, heritage sample-and-hold systems have been burdened with multiple channels provided through multiplexers and with slow analog-to-digital converters. Because of that, these older systems have been forced to perform all the required processing in the initial analog stage and later in a CPU calculation. Both of these techniques have downsides that make them undesirable. Using the input analog stage for some or all of the required analog input shaping and/or processing leads to a system that is not easily reconfigurable. In addition, using a CPU calculation to perform post-analog-input DSP processing requires a lot of overhead and slows the overall system down. Since the advent of the radiation-tolerant, flash-based, pipelined ADC, the design capabilities have drastically improved. It is now possible to design low-noise, low-power, high-speed systems with the ability to implement pulse shaping and real-time DSP of the digitized form of the analog input waveform.

With higher bit resolution and sampling rates come more accurate and detailed representations of the analog input signals, as long as the Nyquist sampling theorem is met. Nyquist dictates that, to ensure an accurate representation of an analog signal, the ADC must sample at least twice the highest frequency of the analog signal. If this is not met, part of the signal will be lost, which is known as aliasing. In the time domain, a violation causes a drastic change in the signature of the analog signal; a sine wave, for instance, will appear to have a much longer period. In the frequency domain, a violation introduces unwanted signal components that were not part of the original signal. Unfortunately, there is no way to restore the signal to its original shape. The ADC accuracy also depends on the error in the conversion. All ADCs suffer from quantization and non-linearity errors due to naturally occurring imperfections in the components and the use of conversion tables. These errors are usually very small [7].
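The folding effect of aliasing can be illustrated numerically. The following sketch computes the apparent frequency of an undersampled tone; for example, a 9 MHz tone sampled at 10 MSPS shows up at 1 MHz:

```python
def apparent_frequency(f_signal, f_sample):
    """Frequency at which a sampled sine tone appears after aliasing.
    Tones above f_sample / 2 fold down into the 0..f_sample/2 band."""
    f = f_signal % f_sample
    return f if f <= f_sample / 2 else f_sample - f

# A 9 MHz tone sampled at 10 MSPS aliases down to 1 MHz,
# while a 3 MHz tone (below Nyquist) is represented faithfully.
```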

The pipelined ADC, or sub-ranging quantizer, is nowadays the architecture of choice for fast ADCs in the range from a few MSPS to about 100 MSPS. The one chosen for this particular system architecture has a sampling rate of 10 MSPS and a 14-bit resolution. It is low power,

[pic]

Figure 7. Flash-based, pipelined 14-bit ADC.

provides no missing codes, and has excellent temperature-drift performance. This ADC is a multistage sub-ranging ADC, meaning the analog signal is quantized in several steps (Figure 7). It performs a coarse conversion for the MSBs and a fine conversion for the LSBs. The analog signal is sampled and held while the first internal flash ADC coarsely quantizes the data. The output is driven through a DAC and subtracted from the held analog input. The residue is amplified and fed into the next flash ADC. This continues until the last 4-bit flash ADC, which resolves the final four LSBs. During the course of the digitization, the bits are time-aligned in shift registers before being fed into digital error-correction logic. This pipelining accounts for the high throughput [8].
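The coarse-quantize, subtract, and amplify sequence can be modeled numerically. The stage widths below are illustrative and not those of the actual 14-bit part:

```python
def subrange_convert(v, vref, stage_bits=(4, 4, 4, 2)):
    """Model of a multistage sub-ranging conversion: each stage coarsely
    quantizes, subtracts its DAC reconstruction, and amplifies the
    residue for the next stage. v is in [0, vref). Total resolution
    here is sum(stage_bits) = 14 bits, stage widths are illustrative."""
    code = 0
    residue = v
    for bits in stage_bits:
        levels = 1 << bits
        stage_code = min(int(residue / vref * levels), levels - 1)  # flash ADC
        code = (code << bits) | stage_code
        dac = stage_code * vref / levels      # DAC reconstruction
        residue = (residue - dac) * levels    # amplified residue to next stage
    return code
```

(The model omits the shift-register time alignment and digital error correction; it only shows why a mid-scale input yields a mid-scale code.)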

5. Conclusion

The SwRI-developed, reconfigurable DSP architecture for space applications has many advantages over the traditional approach of designing a DSP system. This architecture is more efficient, faster, and more compact; requires fewer resources and less support circuitry; and provides a considerably reduced design cycle.

Since the introduction of radiation-tolerant, high-speed ADCs, the resolution of captured data has increased tremendously, on average close to a hundred-fold, making this architecture ideal for missions targeting high-rate pulses.

The reconfigurable DSP FPGA algorithm is the core of the architecture. By processing most or all accumulated data in the FPGA, the CPU can be used for other tasks. This used to be one of the limiting factors on previous systems. Also, since very little mission specific data is processed in the analog front end, the design does not have to be altered much, if at all, between different implementations. There is a multitude of options available for data processing, compression and storage, and physical interfaces.

So far, no implementation has proven to be too small or too big for this architecture. Heritage designs lead to proven, safer systems, limiting risk and increasing chance of mission success.

References

[1] Walls, Buddy, et al., “Development of a Rad Hard 2 GHz Acquisition System for Space,” 2000 IEEE Aerospace Conference, Big Sky, Montana.

[2]

[3]

[4]

[5] National Semiconductor, LVDS Owner’s Manual, Rev. 2.0, 2000.

[6] Epperly, Michael E., et al., “Mission Adaptable CCSDS Formatter/Command Decoder,” 2000 IEEE Aerospace Conference, Big Sky, Montana.

[7]

[8] Maxwell Technologies, 9240LP Datasheet, Rev. 5, 2002.

Biographies

Linda Björk is an engineer with Southwest Research Institute. Since joining SwRI in 2002, Ms. Björk has gained extensive experience in verification testing of space flight hardware, working on NASA’s Deep Impact and GLAST missions. Ms. Björk is currently the lead designer for the Combined Electronics Unit (CEU) digital board, which will be flown on the NASA Interstellar Boundary Explorer (IBEX) mission. Ms. Björk received her bachelor’s degrees in space engineering and applied electronics, and a master’s degree in space engineering, from Umeå University in Sweden.

Steven Persyn is a principal research engineer with Southwest Research Institute. Prior to joining SwRI, Mr. Persyn was a semiconductor process engineer, building CMOS and bipolar products for the Sony Corporation. This experience included the manufacturing of ASICs, as well as testing and yield improvements of ASICs. Since joining SwRI, Mr. Persyn has developed custom hardware for a PCI / VME bridge and test card, as well as a cPCI to VME bridge and backplane. In addition, Mr. Persyn was a lead board engineer for hardware on the SWIFT XRT, Deep Impact, Orbital Express, and GLAST space programs and responsible for final integration, testing, and delivery of SWIFT XRT, Deep Impact, and GLAST electronics boxes. Mr. Persyn has recently been the system engineer on the GLAST and KEPLER programs. Mr. Persyn received his bachelor’s degree in electrical engineering from the University of Texas at Austin, and his master’s degree in electrical engineering from the University of Texas at San Antonio.

Mark Johnson is a senior research engineer with Southwest Research Institute. Mr. Johnson has over twenty-seven years of experience in computer chip architecture, design, simulation, synthesis, documentation, debug, verification, release, and hardware support. Since joining SwRI in September of 2001, Mr. Johnson has designed electronic subsystems for Deep Impact (DI), the GLAST Burst Monitor (GBM), the NPOESS Preparatory Project (NPP), and Orbital Express (OE). He has also performed in an advisory role for the REX design on the New Horizons project. Prior to joining SwRI, Mr. Johnson was a consultant for several firms including Lucent/Bell Labs, IBM, Cirrus Logic, Alcatel, NSC, AMD, and a few start-ups. He began his career with sixteen years at IBM designing microprocessor adapters. Mr. Johnson has a bachelor’s degree in mathematics and electrical engineering from the University of Vermont.

Kelly Smith is the Electromechanical Systems section manager within the Department of Space Systems. He served as the program manager and mechanical lead engineer for the GBM program. He has served as mechanical lead for several of the department’s recent avionics programs, including Deep Impact, SWIFT, ROSETTA IES, and ACeS. He is currently the mechanical lead on the MSL RAD instrument to be included on the MSL rover. Mr. Smith has a bachelor’s degree in mechanical engineering from Texas A&M University and a master’s degree in mechanical engineering from Stanford University.

Buddy Walls is the section manager for the Computer Technology Section within the Department of Space Systems. Mr. Walls has developed space flight hardware for the IMAGE, QuikSCAT, Coriolis, Rosetta, and Swift missions. Mr. Walls was the technical lead for the Deep Impact Spacecraft Control Unit development, and is currently serving as the program manager for the Kepler Spacecraft Control Avionics development. Mr. Walls has a bachelor’s and a master’s degree in electrical engineering from Oklahoma State University.

Michael Epperly is a program manager with Southwest Research Institute in San Antonio, Texas. Prior to joining Southwest Research, Mr. Epperly spent 13 years at Westinghouse Electric Space Division (now Northrop Grumman) on the Defense Meteorological Satellite Program and other defense-related space programs. Mr. Epperly started working at SwRI in 1996, managing hardware programs for IMAGE, New Millennium DS-1, ROSETTA, QuikSCAT, ICESat, Coriolis, Swift, CALIPSO, Deep Impact, and Orbital Express. He is currently managing C&DH hardware development for MSL-RAD, RAISE, NPP, and IBEX. Mr. Epperly received his bachelor’s degree in electrical engineering from the University of Texas, and three master’s degrees, in electrical engineering, computer science, and systems engineering, from the Johns Hopkins University.
