RIT Senior Design Project 10662



D3 Engineering Camera Platform
Design Review

Date: Friday, November 6, 2009
Time: 9:00 am to 11:00 am
Location: RIT Campus, Building 9, Room 4435

Project Team
Gregory Hintz
Samuel Skalicky
Jeremy Greene
Jared Burdick
Michelle Bard
Anthony Perrone

Advisors
Bob Kremens (RIT)
Philip Bryan (RIT)
Scott Reardon (D3 Engineering)
Kevin Kearney (D3 Engineering)

Table of Contents

1 Introduction
  1.1 Summary
  1.2 System Model
  1.3 Detailed System Model
2 Customer Needs
3 Engineering Specifications
  3.1 System Engineering Specifications
  3.2 Sub-System Engineering Specifications
4 FPGA Board
  4.1 Hardware Description Plan (Software Design)
    4.1.1 Goal
    4.1.2 Description
    4.1.3 FPGA
    4.1.4 DSP/OEM
    4.1.5 Analysis
  4.2 FPGA System Speed Analysis
    4.2.1 Reason for not having enough analysis in this area
    4.2.2 Analysis
    4.2.3 Correlation
    4.2.4 Discussion of Calculations
    4.2.5 Conclusion
5 The Connector Board and External Interfaces
  5.1 Overview
  5.2 Design Decisions
  5.3 Current Design
6 Inertial Navigation System (INS)
  6.1 Overview
  6.2 Details
  6.3 Software
7 Chassis Interfaces
  7.1 Needs
  7.2 Specifications
    7.2.1 Aircraft Specifications
    7.2.2 Electronics Specifications
  7.3 Design
8 Vibration Damping
  8.1 Needs
  8.2 Considerations
    8.2.1 Frequencies of Aircraft
    8.2.2 Allowable Vibration in Image
    8.2.3 Component Resonant Frequencies
  8.3 Approach
  8.4 Chassis Design
    8.4.1 Phase 1: Individual Compartments
    8.4.2 Phase 2: Assure Component Scale
    8.4.3 Phase 3: Detailed design to allow for realistic thermal, vibrational, and spatial analysis
    8.4.4 Phase 4: Final Mechanical Design
9 Environmental Management
  9.1 Heat
    9.1.1 Major sources of heat generation inside chassis
    9.1.2 Heat Transfer models
  9.2 Heat Transfer analysis, a radiation model
    9.2.1 Assumptions
    9.2.2 Analysis
    9.2.3 Variables
  9.3 Heat Transfer analysis, a conductive model
  9.4 Heat Transfer analysis, a combined mode approach
10 Other Environmental Considerations: Condensation
  10.1 Dew Point analysis
11 Mounting
  11.1 Internal Mounting
    11.1.1 Electronics mounting
    11.1.2 Optics mounting
  11.2 External Mounting
12 Appendix
  12.1 Connector Board Schematic
  12.2 CameraLink to D3 Chip Schematic

1 Introduction

1.1 Summary

The customer, D3 Engineering, desired that we integrate supplied components into an environment-ready, flight-capable package that can record and transmit multi-spectral ground images and associated INS data. This solution should be capable of (if not initially configured for) processing that data in some way, including, but not limited to, compositing images from multiple spectrums and "stamping" image data with real-time INS data.

1.2 System Model

Figure 1: Black Box model of System

The system will have up to 4 cameras and lenses mounted internally on the bottom side of the enclosure, feeding data into the enclosure. Processed images with the corresponding INS data will be sent out through the Fast Ethernet port. CameraLink and Gigabit Ethernet will also act as inputs for external cameras.

1.3 Detailed System Model

Figure 2: System Model of customer-supplied parts and the basic configuration of the system.

The system is divided into its main parts for the purpose of design and later reference. The Electronics Enclosure houses all of the electronics. This unit is separate from the camera module to better allow for expandability and to help manage the temperature and environment differences between the two. The Electronics System is made up of the OEM DSP Board, the NovAtel OEM Board, the FPGA Board, the Connector Board, and the external connectors. The OEM DSP Board is a digital signal processing board that the customer has designed; it already provides basic image processing, including compression and resolution modification, as well as INS integration. The NovAtel OEM Board is a customer-supplied multi-frequency GNSS receiver. This board handles the incoming GPS signal used to determine the location of the camera module.
The FPGA Board, also referred to as the Processor Board, is where most of the processing and routing will take place. The FPGA will be located on this board along with internal memory, controlling the signals coming from the cameras and performing basic processing on them. The FPGA will act as a switch between all the units of the system and store the necessary data onto the SSD SATA hard drive for access at a later time. The Connector Board acts as a medium between the FPGA Board and the external connectors. Major power regulation will take place on this board, along with conversion between CameraLink and D3 protocols, to allow for easier processing on the FPGA Board.

The Camera Enclosure houses up to 4 cameras using the D3 camera protocol. For this project we are only required to test the system with 2, but the customer would like to be able to expand to 4 later.

2 Customer Needs

- Use supplied components:
  - 10MP visual camera
  - IR camera
  - 1 of the 2 inertial navigation systems, depending on availability:
    - NovAtel OEM Board OEMV3
    - NovAtel OEM Board OEMV2
  - OEM Camera Processing Board
- Interface to a single 10Mpixel camera through the proprietary "D3 Camera" connector.
- Interface to a single thermal camera through a Camera Link interface.
- Capture 10MP data at 1 FPS.
- Capture the thermal camera data synchronized with the 10Mpixel camera.
- Capture INS data and store it to match the corresponding photos.
- Accept data from auxiliary external cameras and INS units.
- Make data overlay and processing possible on-board.
- Output data from the supplied OEM Board connection for real-time viewing.
- Store data internally during flight using an SSD SATA drive.
- Package must include mounting and space necessary for four cameras.
- Package everything (except for the IR camera) to protect it against the environment and to minimize the size.
- "Everything" includes:
  - (4) visual cameras and their lenses
  - (1) INS sensor
  - (1) OEM Camera Processing Board
  - Any other components necessary for operation
- Position images for ground observations.
- Make cameras separable from the processing hardware.
- Interface package to a light passenger aircraft.

3 Engineering Specifications

3.1 System Engineering Specifications

Constraints
- The system shall use the supplied 10MP visual-band camera at 1 FPS.
- The system shall use the supplied CameraLink camera at 30 FPS.
- The system shall use the supplied Inertial Navigation System.
- The system shall use the supplied OEM Camera Processing Board.

Interfaces
- The system shall interface with the customer's proprietary software.
- The system shall be powered from an external source.
- The system shall position the cameras with unobstructed line of sight in a direction perpendicular to the direction of flight, on the bottom side of the airplane.
- The system shall connect to a programming interface for hardware reconfiguration.
- The system shall connect to two external cameras and one external INS module.

Physical
- The system shall not exceed 8" x 6" x 7.5" tall.
- The system shall weigh no more than 15 lbs.
- The camera enclosure must be removable from the electronics.

Environmental
- The system shall operate in the following environment:
  - Temperature: -50°C to 45°C
  - Humidity: 90% or less
  - Altitude: 10,000 ft (3048 m)
  - Shock and vibration: per RTCA DO-160
- The system shall limit EMI emission according to the MIL-810G standard.

Configurability
- The system shall enable configuration of the camera interface.
- The system shall enable configuration of image compression.
- The system shall enable configuration of INS interfaces.

Capacity
- The system shall store up to 20 minutes' worth of image data.
- The system shall be able to house 4 cameras at a time.

Processing
- The system shall store the raw data from the cameras and the INS data to allow for access before the next mission.
- The system shall process the images and output the data to the SSD hard drive within 10 seconds of the picture being taken.
- The system shall transmit the low-resolution images out of the OEM Board's 10/100 connector within 10 seconds of the pictures being taken.

3.2 Sub-System Engineering Specifications

Package
- Given the environmental conditions defined in the System Engineering Specifications, the packaging shall maintain the following internal environment for the electronic components:
  - Temperature: 0°C to 70°C
  - Humidity: < 60%
  - Shock and vibration: per RTCA DO-160
- The packaging shall contain EMI per MIL-810G.
- The packaging shall not degrade the optical performance of the cameras.
- The packaging shall enable replacement of any component within 10 minutes, given a trained user, without custom tools.
- The packaging shall have the following connectors available externally:
  - 10/100 connector
  - Gigabit Ethernet connector (x2)
  - CameraLink connector (x2)
  - Power connector (TBD)
  - DB-9 connector
  - RCA video out
- The packaging without electronics installed shall weigh no more than 10 lbs.
- The external packaging shall not exceed 16" x 6.5" x 5" (length x width x height).
- The packaging shall mount fixed to a flat plate.

Processor Board
- The processor board must be reconfigurable for multiple different operations by a technical expert. Different operations include, but are not limited to:
  - More than 2 inputs used
  - Overlay of the camera inputs with the corresponding INS data
  - Change in the rate at which inputs are selected
- The processor board shall not exceed 5" x 6" x height TBD.
- The processor board shall weigh no more than 2 lbs.
Interfaces
- The processor board shall be connected to the connector board by:
  - A high-speed header
  - A header for the power
- The processor board shall be connected to the following external connectors directly (not through the connector board):
  - Gigabit Ethernet (x2)
- The processor board shall be powered by GND, +5V, +12V, and +3V.
- The processor board shall be securely mounted to the connector board and the OEM board so as not to allow movement between the boards.
- The processor board must be able to communicate with the supplied OEM Camera Processing Board and the onboard storage device simultaneously through high-speed connections.

Connector Board
- The connector board shall have the following inputs:
  - High speed to processor board
  - CameraLink from external connection
  - Power input cable from external connection
  - Power output to processor board
- The connector board shall provide the necessary power for the system, outputting the following voltages from an input ranging from +9 to +36 V DC, in order to supply adequate power for the devices in the system:
  - GND
  - +5V, TBD W
  - +12V, TBD W
  - +3V, TBD W
- The connector board must fit inside the electronics enclosure and shall not exceed 5" x 6" x height TBD.
- The connector board must not exceed 2 lbs.

Storage Unit
- The storage unit must be a commercially available solution.
- The storage unit must be upgradable and able to be removed and replaced within 5 minutes, given a trained user, without custom tools.
- The storage unit must be equal to or greater than 250 GB in order to allow enough storage of required data. (6.a)
- The storage unit shall weigh no more than 1 lb.
- The storage unit shall not exceed 4" x 6" x 3".
- The storage unit shall be a solid-state drive able to withstand the environment of the electronics enclosure.
- The storage unit shall be connected to the processor board using SATA.
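As a rough check on the 250 GB requirement, the script below estimates the raw storage consumed by a 20-minute mission. This is a sketch, not part of the specification: pixel counts and bit depths are taken from the FPGA speed analysis later in this document, and the four-visual-camera case is an assumption based on the capacity requirement.

```python
# Rough storage estimate for a 20-minute mission (sketch).
# Assumed parameters, from the speed-analysis section:
#   visual camera: 3664 x 2748 pixels, 12 bits/pixel, 1 FPS
#   IR camera:     640 x 480 pixels, 8 bits/pixel, 30 FPS
#   INS:           ~1 kB/s over RS-232
VISUAL_PIXELS = 3664 * 2748
VISUAL_BITS_PER_PIXEL = 12
IR_PIXELS = 640 * 480
IR_BITS_PER_PIXEL = 8
IR_FPS = 30
INS_BYTES_PER_SEC = 1_000

def bytes_per_second(num_visual_cams=4):
    """Raw (uncompressed) data rate for the given number of visual cameras."""
    visual = num_visual_cams * VISUAL_PIXELS * VISUAL_BITS_PER_PIXEL / 8  # 1 FPS
    ir = IR_PIXELS * IR_BITS_PER_PIXEL / 8 * IR_FPS
    return visual + ir + INS_BYTES_PER_SEC

mission_seconds = 20 * 60
total_gb = bytes_per_second() * mission_seconds / 1e9
print(f"{total_gb:.1f} GB for 20 minutes")  # well under the 250 GB minimum
```

Even with four visual cameras storing raw 12-bit frames, the 20-minute total is on the order of 85 GB, so a 250 GB drive leaves generous headroom for longer missions or metadata.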
4 FPGA Board

4.1 Hardware Description Plan (Software Design)

4.1.1 Goal

This system will be able to receive data from multiple sources simultaneously and process that data. It will then save the data to a storage medium and pass it along to another processor for compression and real-time viewing.

4.1.2 Description

To accomplish this goal, the main system is broken into two main parts: FPGA and DSP. The FPGA will accept input from various sources and process it in different ways. The DSP will receive input from the INS device, as well as images from the FPGA, and do some processing on them.

4.1.3 FPGA

Figure 3: Software Model of FPGA

The FPGA will have software describing specific components. This software has the potential to be turned into a physical device (ASIC) in the future to speed up the design and reduce costs. The major functions of the FPGA are to accept input, process that data, and export the data. It will receive data from three different types of devices: D3 imagers, Gigabit Ethernet-enabled cameras, and a DSP co-processor. The data from these devices will be controlled by a single main component that will function as the Central Dispatch. Data will be exported two ways: through the DSP and to the hard drive.

The D3 imagers will each have a control module to receive the data in multiple cycles (relative to the external device, not the internal clock). Each will initiate a collection cycle when instructed by the Central Dispatch. Upon receipt of the data in full, it will pass the data off to an intermediary storage location (DDR memory), notify the Central Dispatch of its data deposit, and reset to wait for the next collection signal from the Central Dispatch. The Gigabit Ethernet camera controllers will function similarly to the D3 imager controllers.
However, they will have a complex component to handle the translation to and creation of TCP/IP packets, sending this data out at gigabit speeds (1000 Mbps) using a special Ethernet controller.

The hard drive will communicate at SATA I speeds. This requires a special controller to convert to serial communication, using specific hardware to keep the data rates high. This component will receive commands from the Central Dispatch to get data from memory and store it on the hard drive.

The image-processing element will be designed as a pipeline to increase efficiency. This will allow us to replicate the element many times over, either to speed up processing or to complete various levels of processing. This component will receive signals from the Central Dispatch to get data from memory (DDR) and then process it. Upon completion of processing, it will return the data to memory and wait for the next instruction from the Central Dispatch.

The last and essential component is the Central Dispatch (CD), which will control the other components. This element will contain various lists (queues) of data that either needs to be processed, written to the hard drive, or sent to the DSP (for more processing or compression). It will also, at regular intervals, prompt the data input components to get data from their devices. These components will notify the CD of the location of the data acquired, and the CD will add the to-do item to its various lists.

4.1.4 DSP/OEM

Figure 4: Software Model of DSP

The DSP uses some signals similar to those of the D3 imagers; however, it also has some high-speed data lines. This OEM controller module will be a two-way communication platform. It will send images to the DSP for more processing and return, and for compression to be sent out the Fast Ethernet connection to a user for real-time viewing. This module will also serve as the main entry point for spatial orientation data into the FPGA.
Upon receipt, the INS data will be stored directly to the hard drive. Image data may or may not have already been processed in the FPGA, and thus will either be sent to be processed or stored on the hard drive.

The co-processor for this design is the DSP, which will perform some functions already designed by the customer as well as new ones, including image processing, image compression, INS integration, and real-time video out. The DSP functions more like a CPU, with discrete (hard-wired) components that process instructions run on a kernel (core). The DSP will need functions written for communication with the FPGA (including sending information back and forth), specifying what type of processing to do and where the data will go next.

4.1.5 Analysis

Based on the breakdown above, this software package will be implementable in the time allotted for this project. The customer has acknowledged the large scope of the project and has designated certain levels of completeness for this area of design. Ideally they would like the entire software design to be implemented. However, given the time allotted and the large amount of other work to be completed, they have decided to stress the hardware design of this project. Thus, the minimum requirement is to pass data through the FPGA to the DSP/OEM Board they have provided. This minimum will be straightforward to implement given the background of the team (see the FPGA speed analysis section as a reference). We as a team will at least lay the foundation for this software by creating VHDL entities for the various components described above. This will allow other engineers to come in behind us and fill in the gaps with little overall knowledge of the project, or even make outsourcing the work a trivial task. Based on the breakdown, two main groups of elements exist.
Both of them can be implemented in parallel, and will even be developed in different environments using different languages: the FPGA design will be implemented in VHDL, while the DSP code will be written in C. This will allow the team to work efficiently and complete the work within the time limits.

The image processing elements will probably not be fully implemented, due to time constraints and the limited availability of algorithms and IPs (intellectual property cores) for these components. We will implement them as straight-through entities with no function inside. This will allow us to simulate processing by delaying the data, and will enable us to estimate which types of processing can be implemented in the time available while keeping the images streaming in "real time" at the required rates. It will also allow multiple types of processing schemes to be implemented and interchanged later, based on need and the planned use of this design.

The various IO components will all be based on one "parent" component that encompasses the majority of the functions in the camera modules. This will again reduce the total amount of time needed to complete the design. From this parent, we will be able to tailor each "child" component for the specific type of IO, or add other elements, such as Ethernet controllers, to interface with other types of camera inputs.

Using these plans, the design will be able to be completed in the time allotted. Additionally, the customer will be able to configure this design to their specific needs now and in the future using the same hardware. This accomplishes the customer's desire for adaptability and an acceptable lifetime. This configurability is inherently built into the design of the FPGA and the system interconnecting the various devices; the design of the software is influenced by this design in hardware.
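To make the Central Dispatch queueing scheme described above concrete, here is a minimal behavioral sketch in Python. The real component would be a VHDL entity, and all names here are hypothetical; the point is only the data flow: input controllers report where acquired data landed in DDR memory, and the dispatcher keeps separate to-do queues for processing, disk writes, and DSP hand-off.

```python
from collections import deque

class CentralDispatch:
    """Behavioral sketch of the FPGA's Central Dispatch (hypothetical names)."""

    def __init__(self):
        # One to-do list per destination, as described in the FPGA section.
        self.to_process = deque()  # frames awaiting the image pipeline
        self.to_disk = deque()     # frames awaiting the SATA controller
        self.to_dsp = deque()      # frames awaiting DSP compression

    def notify_data_ready(self, mem_addr, needs_processing=True):
        # An input controller reports where it deposited data in DDR memory.
        if needs_processing:
            self.to_process.append(mem_addr)
        else:
            self.to_disk.append(mem_addr)

    def processing_done(self, mem_addr):
        # Processed images go both to storage and to the DSP for compression.
        self.to_disk.append(mem_addr)
        self.to_dsp.append(mem_addr)

    def next_for(self, queue_name):
        """Hand the next to-do item to the named consumer, if any."""
        q = getattr(self, queue_name)
        return q.popleft() if q else None

cd = CentralDispatch()
cd.notify_data_ready(0x1000)        # a D3 imager deposited a frame
addr = cd.next_for("to_process")    # the image pipeline picks it up
cd.processing_done(addr)            # pipeline finished; fan out the result
print(cd.next_for("to_disk"))       # SATA controller takes it: prints 4096
```

In hardware, each queue would be a FIFO and each "consumer" a component signaled by the CD, but the dispatch logic is the same.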
4.2 FPGA System Speed Analysis

4.2.1 Reason for not having enough analysis in this area

In a normal industrial situation when using an FPGA, all operations are coded, simulated, and tested prior to choosing an FPGA model. However, due to our time constraints, coding and testing before designing the circuit board is not feasible. Hence, we are using the resources we have to approximate, as best as possible, the needs of our operations on this FPGA.

4.2.2 Analysis

Members of this team have worked with FPGAs before and have researched specific applications for them. One such implementation is a neural network. Software in the past has used thousands of computing resources to simulate the human brain's learning capabilities; more recently, designs based on an artificial network of brain cells have been used. The following presents this work and explains how the design of our system will be fast enough to handle the operational data rates.

Figure 5: Neural network model

Traditionally, the term "neural network" referred to a network or circuit of biological neurons. The modern usage of the term often refers to artificial neural networks, which are composed of artificial neurons or nodes (programming constructs that mimic the properties of biological neurons). These networks may be used either to gain an understanding of their biological counterparts, or to solve artificial intelligence problems without necessarily creating a model of a real biological system. The real, biological nervous system is highly complex and includes some features that may seem superfluous based on an understanding of artificial networks. The cognitive modeling field involves the physical or mathematical modeling of the behavior of neural systems, ranging from the individual neural level (e.g.
modeling the spike response curves of neurons to a stimulus), through the neural cluster level (e.g. modeling the release and effects of dopamine in the basal ganglia), to the complete organism (e.g. behavioral modeling of the organism's response to stimuli). For more detailed information about neural networks, please see external sources such as Wikipedia or GCCIS faculty.

This design was implemented on a Digilent Basys Spartan 3E-100 development board. It currently performs the XOR function; however, it has no heuristic coding to help out. Instead it uses the theory touched on above to learn acceptable and unacceptable responses to input. This is not a simple design. The outputs of this network come within 10% of the goal values for "high" and "low". These values map to digital logic values in hardware and can be used as such. The results of the implementation in VHDL using the Xilinx WebPack ISE are shown below in Table 1.

Table 1: Device Utilization Summary for the Spartan 3E-100 (total resources used: ~19%)

  Logic Utilization          Used    Available   Utilization
  Number of Slice Latches    22      1,920       1%
  Occupied Slices            197     960         20%
  4-input LUTs               360     1,920       18%
    Logic                    328                 17%
    Route-thru               32                  3%
  Number of bonded IOBs      10      108         9%
  MULT18X18SIOs              4       4           100%

Table 2: Data Speeds

  Node       Levels   Time
  Data In    29       13 ns
  Data Out   2        5 ns

Looking at the data in Table 1, we can see how few resources this design took up: a little less than 19% overall. This is not much considering the complexity of the design and the simplicity of the Spartan 3E-100 FPGA. For example, the last row of Table 1 shows that only 4 built-in, hardware-optimized 18x18-bit multipliers exist in this device, and all are used. This design does a substantial amount of math to calculate the weights on the connections between neurons, so additional ALUs had to be created from general-purpose slices, as can be seen in the slice usage of about 20%.
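The logic-path delays in Table 2 convert directly into maximum update rates. A quick check of the arithmetic (a sketch, using the delays measured above):

```python
# Convert the logic-path delays from Table 2 into maximum toggle frequencies.
data_in_delay_s = 13e-9   # 29 logic levels, input to end of processing pipeline
data_out_delay_s = 5e-9   # 2 logic levels, pipeline to output pin

f_in_mhz = 1 / data_in_delay_s / 1e6
f_out_mhz = 1 / data_out_delay_s / 1e6
print(f"input path:  {f_in_mhz:.2f} MHz")   # ~76.92 MHz
print(f"output path: {f_out_mhz:.0f} MHz")  # 200 MHz
```

These are the 76.92 MHz and 200 MHz figures used in the discussion that follows.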
This design was very fast and was able to process changes in inputs very quickly. Twenty-nine levels of logic must be traversed from the input to the end of the processing pipeline; however, this takes only 13 ns, which corresponds to a frequency of 76.92 MHz. This implies we can handle roughly 76 million changes of input per second on each pin feeding this logic design. From that point in the design to the output is only 2 levels and takes 5 ns (a rate of 200 MHz). This is quite speedy on a device that was released as the low-priced, slowest device in its product line in 2005; not many people are still using computers considered "hot items" in 2005 (think first dual-core processors, Celeron D, etc.). If the internal logic were simpler, or more deeply pipelined, we would be able to reduce these delays further in the final design.

4.2.3 Correlation

Unfortunately, we have not been able to simulate this design on a Spartan-6 LX75T core; we have been having problems getting the Xilinx software set up. However, we can attempt to build a ratio between the two devices. For example, Table 3 on the next page directly compares the resources that both devices have available. As you can see, there is a large increase in on-die resources. This does not mean we can do the same task with fewer resources, just that the design will take up a smaller fraction of this model. We must keep in mind that although there are more resources available, a larger number of resources will be used to route the data throughout the device. However, this model is built using a different transistor process, and thus will be able to run faster, since the distance between individual elements on the same chip is smaller (up to a limit). The standard clock speed of the Spartan-6 is 2.5 times that of the tested unit. This will directly correlate to the speed of the device running from input to output.
However, this is not a 1:1 ratio; we cannot simply say the design will run 2.5x faster because the clock runs at 2.5x that of the Spartan 3. What we can say is that the speed of a given sequential calculation will be higher by some factor above 1x and below 2.5x. There are various factors to consider in this estimate, including the amount of actual processing to be done (currently unknown) and the clock speeds of the other components in the design (memory, SATA, etc.). However, given the amount of resources available and the low end of the known speed spectrum (250 MHz), we estimate the factor will be closer to 2x.

For example, say we are processing a pixel and we need to do X amount of math that takes Y seconds. If Y is longer than 1/30th of a second (the IR camera's picture rate), we have a problem: we cannot process pictures fast enough. We will solve this not by making the FPGA faster or raising the clock speed, but by parallelizing the math in X, reducing the time Y needed to process the pixel. Only so much can be done in parallel, however, and we will not be utilizing all of the resources of this large FPGA. We will solve that next problem by processing multiple pixels in parallel: there is no reason we cannot simply copy the image pipeline above (call it A) and create another, B. Then, if the pixel data for the next image arrives while pipeline A is not yet done, we can start the processing in B. This technique is very scalable, so the amount of processing we do is directly proportional to the number of parallel pipelines needed to process all of the data in "real time".

Table 3: Comparing Spartan Models

  Resource Type             Spartan 3E-100   Spartan-6 LX75T   % more than 3E-100
  Slices                    960              11,662            1210%
  LUTs                      1,920            46,648            2430%
  Latch/FFs                 1,920            93,296            4860%
  User I/O                  108              296               274%
  Diff. Pairs               40               148               370%
  18x18SIO/DSP48A slices    4                132               3300%
  Functional Clock Speed    100 MHz          250 MHz           250%
  Transistor process        90 nm            45 nm             50%

Initially, we will just use the FPGA as a large and super-fast MUX. This will allow us to connect multiple cameras to the OEM board. The complexity of this logic is much less than that of the neural network simulated above, which implies the delay from input pin to output pin will be lower (how much lower is irrelevant for this analysis, since 76 MHz already more than meets our needs). Here is why:

Visual Camera
  10MP image size = 3664 x 2748 = 10,068,672 pixels
  1 pixel = 12 bits of data (the interface is 16 bits wide, so each pixel passes in 1 clock cycle)
  Clock cycles / pixel = 1
  Images / second = 1
  Speed = cycles/pixel x pixels x images/second = 1 x 10,068,672 x 1
  Total required input rate = 10,068,672 Hz (about 10.07 MHz)

IR Camera
  Image size = 640 x 480 = 307,200 pixels
  1 pixel = 8 bits of data (the interface is 16 bits wide, so each pixel passes in at most 1 clock cycle)
  Clock cycles / pixel = 1
  Images / second = 30
  Speed = cycles/pixel x pixels x images/second = 1 x 307,200 x 30 = 9,216,000
  Total required input rate = 9,216,000 Hz (about 9.2 MHz)

INS Unit
  Total size of data / capture = unknown
  Total size of data / second = 1 kB
  RS-232 rate of device = unknown (serial, so 1 bit / cycle)
  Number of captures / second = 30 (same as the fastest image rate)
  Data per capture = 8,000 bits / 30 captures ≈ 270 bits (≈ 34 bytes)
  Speed = 8,000 bits / second = 8,000 baud
  Note: because the INS uses the RS-232 standard, rates are given in baud, the gross bit rate in bits/second.

4.2.4 Discussion of Calculations

The calculations above show that the maximum speed for any type of camera connected to this system will be about 10 MHz.
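The pixel-rate arithmetic can be reproduced with a short script (a sketch; camera parameters are the ones listed in the calculations above):

```python
def input_rate_hz(pixels, cycles_per_pixel, frames_per_second):
    """Required capture clock rate: cycles/pixel x pixels x frames/second."""
    return cycles_per_pixel * pixels * frames_per_second

visual_hz = input_rate_hz(3664 * 2748, 1, 1)   # 10MP camera, 1 cycle/pixel, 1 FPS
ir_hz = input_rate_hz(640 * 480, 1, 30)        # IR camera, 1 cycle/pixel, 30 FPS

print(f"visual: {visual_hz / 1e6:.2f} MHz")    # 10.07 MHz
print(f"IR:     {ir_hz / 1e6:.2f} MHz")        # 9.22 MHz
```

Both rates fall well below the 76.92 MHz input rate measured on the much older Spartan 3E-100 test design, which is the basis for the conclusion of this section.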
From this we can ultimately say that, yes, the image data will be able to be received in "real time" without causing any slowdowns in the system. You might object that with 6 cameras each running at 10 MHz, capturing all the data would require 60 MHz; but the FPGA can do more than one thing at once. We can design capture components for each camera, each individually capable of running at 76 MHz. All of this data does have to be funneled into the same places (DDR, SSD, OEM board), but once the data is inside the FPGA, things run much faster, as shown by the 200 MHz speed calculated from the FPGA to the external pins. That speed is over and above what we would need for 6 cameras (60 MHz).

4.2.5 Conclusion

From the calculations above, the FPGA is able to handle getting the data in. Using the strategies above, we can parallelize appropriately to get all of the data in and processed successfully in the required time. The internal speeds of the FPGA allow plenty of time to organize the data and send it out to the various devices.

5 The Connector Board and External Interfaces

5.1 Overview

The Connector Board will provide the electrical hardware interface for several of the system's key requirements, namely the Inertial Measurement Unit (IMU) connector, the two Camera Link camera connectors, and the primary system power supply connector. To support the Camera Link cameras, circuitry will be included to convert from the Camera Link data format to the D3 Imager interface.
Additionally, three of the six voltage levels needed by the system will be derived from the power supply using voltage regulator and monitoring circuitry.

The system specification provided by the customer instructed that two Gigabit Ethernet (GigE) connectors, one 10/100 Ethernet connector, and one RCA output connector should be included, in addition to those already listed.

5.2 Design Decisions

Initially, there was some confusion over where each of the respective connectors would be mounted: the original specification implied that all external connectors would be mounted on the Connector Board directly, although it was not clear how practical or necessary such an arrangement would be. After some deliberation, the decision was reached that the GigE, 10/100 Ethernet, and RCA interfaces need not be associated with the Connector Board, and that only the Camera Link connectors should be mounted on the Connector Board itself. The reasoning for each decision follows.

The hardware driver for the two GigE connectors will be implemented in the FPGA, which was selected with this specific need in mind, with a facilitating IC on the FPGA Board. Since the hardware interface for GigE is relatively complicated, minimizing the number of transitions from board to wire, etc., is desirable. Therefore, the GigE connectors will be mounted on the FPGA Board, appearing side by side beneath the Camera Link connectors.

The 10/100 Ethernet and RCA interfaces will be handled by the customer-provided OEM Board, which has integrated support and connectors for both. Passing these interfaces from the OEM Board to the FPGA Board and then to the Connector Board would waste board space and serve no purpose.
Therefore, a direct link from the OEM Board to panel-mounted 10/100 Ethernet and RCA connectors will be used.

The IMU connector will be panel mounted in consideration of its large size and the limited space on the Connector Board; a DB-17 connector with integrated coaxial lines was found to be the most readily available option that satisfies both the need for RS-232 support and the need for a coaxial data line. The various data lines will connect to a board header on the Connector Board with a direct link to the FPGA Board. Since the IMU interface uses the RS-232 standard to receive and respond to commands, a null-modem configuration will be implemented on the Connector Board to allow proper communication.

The two Camera Link connectors will be mounted directly onto the Connector Board, partly because they require non-trivial format-conversion circuitry to operate with the FPGA and OEM Board, and partly because both fit conveniently within the system size limitations. The Camera Link to D3 Imager conversion circuitry will be placed on the Connector Board for three reasons: to save space on the FPGA Board; to limit design complexity, since Camera Link uses differential signaling that would complicate transfer from the Connector Board to the FPGA Board; and to minimize the number of data lines that must cross between the boards, as the D3 Imager format requires fewer data lines than Camera Link.

The power supply connector will be panel mounted with a direct connection to the Connector Board via a board header. To prevent interference that the circuitry on the FPGA Board may produce, the incoming power (9 to 36 V) will be switched down to 12 V, 5 V, and 3.3 V on the Connector Board. These three voltage lines will be linked to the FPGA Board, where they will be further dropped to 2.5 V, 1.8 V, and 1.2 V.
In addition to the interference concerns, the voltage regulators and monitors for some of the voltage lines will be placed on the Connector Board to save space on the FPGA Board.

Current Design

After determining the features and needs of the Connector Board, the block diagram in Figure 6 was developed to illustrate the design in an easily digestible form. The block diagram was followed by a schematic for the Connector Board (Appendix A, Figure A1), derived from a customer-provided circuit (Appendix A, Figure A2) that converts from the Camera Link format to the D3 Imager format. The power regulator circuitry is discussed in greater detail in its respective portion of this document.

In addition, after all major Connector Board elements were determined (integrated circuits, connectors, etc.), an accurately sized "scarecrow" diagram was drawn to provide a realistic estimate of the minimum board size necessary for the Connector Board (Figure 7).

Figure 6: Block diagram of the Connector Board.
Figure 7: Accurately sized "scarecrow" diagram of the Connector Board.

Inertial Navigation System (INS)

Overview

An INS combines location and orientation data retrieved from a Global Navigation Satellite System (GNSS) and an Inertial Measurement Unit (IMU). The best-known GNSS is the U.S. Global Positioning System (GPS), although the Russian GLONASS system is also operational and several other systems are in development.
An IMU is a local device that determines which direction the device is facing and at what speed it is moving.

Details

The original customer specification called for a complete INS, although cost and availability concerns have scaled the requirement back to supporting just the GNSS, with plans to include an IMU if one can be acquired that satisfies our needs at a cost acceptable to the customer.

The customer has also specified that the NovAtel OEMV family of GNSS receivers should be used, with the most likely choice coming down to either the OEMV-2 (Figure 8) or the OEMV-3 (Figure 9), although the final decision is ongoing. While both support data transfer over an RS-232 serial communications bus in conjunction with a coaxial data line, their power requirements differ: the OEMV-2 requires a 3.3 VDC (+5%/−3%) supply, whereas the OEMV-3 accepts 4.5 to 18 VDC. If a final selection is not made before our designs are finalized, both possibilities will need to be accounted for.

Software

Software interaction with the OEMV board will be performed by a routine in the customer-supplied OEM Board Digital Signal Processor (DSP). Communication with the OEMV involves sending commands over the RS-232 serial bus in ASCII (plain text, verbose), abbreviated ASCII (plain text, non-verbose), or binary format. Responses are returned from the OEMV board in the same format, with some data sent over the coaxial data line. While detailed software specifications have yet to be written, a flowchart has been developed to illustrate the fundamental routine for interfacing with the OEMV board (Figure 10).

Figure 8: OEMV-2 GNSS receiver board.
Figure 9: OEMV-3 GNSS receiver board.
Figure 10: Basic routine for interfacing with the OEMV GNSS board.

Chassis Interfaces

Needs

The chassis must meet two distinct sets of interface criteria. First, it must interface with the electronics it was designed to enclose.
It must house them internally as well as provide for their interface with external equipment. The customer also requires that the cameras be separable from the processing electronics. Second, the chassis must interface with two airframes: that of a conventional single-propeller passenger plane and that of the RIT UAV Airframe "C" design. The former represents the flight platform that will actually be used by the customer. The latter represents a "loose" constraint: the RIT airframe is used merely as a means of obtaining a tighter size restriction. The customer desires that the module be "compact", so Airframe "C" is used as the design specification for "compact".

Specifications

Aircraft Specifications

The specifications required to meet the above needs are summarized in the table below.

Table 4: Differences between aircraft and UAV to consider when designing the chassis

Small Passenger Aircraft | RIT U.A.V. Airframe "C"
Must be mountable to a flat plate | Must be mountable to a flat wooden base
Smaller than a person; approx. 2' x 2' x 5'6" tall | Less than 16" x 6.5" x 5" tall
Less than 150 lbs (68 kg) | Less than 15 lbs (6.8 kg)

Though all efforts will be made to conform to the requirements of Airframe "C", failure to comply will not render the device unsuccessful.

Electronics Specifications

The chassis must present the following electrical connectors on its outside surface:
- 2 Gigabit Ethernet
- 1 10/100 Ethernet
- 2 CameraLink
- 1 DB-9 w/ integral coaxial
- 1 RCA video
- 1 3-pin Amphenol power
- 1 indicator LED
- 1 USB

The chassis must house the following components:
- 4 D3 cameras with lenses*
- 1 D3 OEM image processor board
- 1 FPGA-based controller board
- 1 custom-built connector board
- 1 NovAtel OEMV-3 GPS board
- 1 MicroStrain 3DM-series IMU
- 1 2.5" solid-state hard drive
- Interconnecting wiring for the above

*Lenses are Linos Mevis-C 16 mm

Design

The finalized design of the chassis can be seen in Section 8.4.4. It encloses all of the components listed in Section 7.2.2, dividing them into two sub-sections.
The optics sub-section encompasses the cameras, lenses, and the IMU device, and serves as the base of the device. This larger enclosure determines the footprint of the device. The electronics sub-section encloses the remainder of the capture and processing hardware, and sits atop the optics section during normal use. The sub-sections are separable, and both are capable of being mounted safely when separated. Overall, the system measures 10.25" long x 6" wide x 6.5" tall and weighs 10.9 pounds. These specifications meet the requirements of the RIT UAV Airframe "C" payload in every measure but height. Because the height exceeds the allowable measure by a full 1.5", it is not practical to attempt to meet that requirement by optimizing this configuration.

Vibration Damping

Needs

The device needs to maintain structural integrity and take clear pictures while subjected to normal aircraft vibrations.

Considerations

Frequencies of Aircraft

The vibration character of the aircraft in which the module will be mounted is taken to be the vibration spectrum defined in RTCA DO-160, Section 8. Test category "S" is used, representing a standard fixed-wing aircraft during normal operation. The device is assumed to be mounted in Aircraft Zone 2 ("Instrument Panel, Console & Equipment Rack"). The vibration character is a sinusoidal wave, defined by varying frequencies and peak-to-peak amplitudes. The vibration spectrum is as follows:

Frequency | Amplitude
5 – 15 Hz | 0.1 in
15 – 55 Hz | 0.01 in
55 – 500 Hz | linear range: 0.01 in @ 55 Hz to 0.0002 in @ 500 Hz

Allowable Vibration in Image

The clarity of images is quantified by a quality colloquially known as "smear", measured as the number of pixels' worth of distance that the aircraft moves while the shutter is open. Smear is a function of aircraft speed and altitude, the image angle of the lenses in use, and the camera shutter speed.
Smear is desired to be less than half a pixel, and must be less than one pixel. The following flight parameters represent normal aircraft operation:

Max. aircraft speed: 70 knots
Altitude range: 1000 – 5000 ft
Lens focal length: 25 mm
Lens image angle: 38.1°

An additional speed component arises from the vibration itself, calculated as the derivative of the vibratory equation of motion. The position equation x(t) = A sin(F t) yields the speed equation x'(t) = F·A cos(F t), so the maximum speed due to vibration is F·A. For calculation purposes, 0.1 in and 500 Hz were used, which translate to a maximum speed of 1.27 m/s.

Calculated smear due to aircraft speed was 0.66 pixels at 1000 ft and 0.13 pixels at 5000 ft; smear falls to the desired limit at 1500 ft, where it is 0.44 pixels. Smear due to vibration-induced speed was found to be an order of magnitude smaller than that due to aircraft speed: 0.023 pixels at 1000 ft, reducing to 0.0047 pixels at 5000 ft.

Component Resonant Frequencies

Most parts are small and rigid, so their resonant frequencies are higher than those likely to be experienced. This category should not be forgotten, as more detailed data may reveal otherwise, but it ranks very low on the scale of actual risk of damaging the system.

Approach

Prior to the analysis of Sec. 8.2.2, it was thought that some form of mechanical isolation would be required to eliminate image distortion. However, the analysis shows that the vibration character of the aircraft will not significantly degrade image quality.

Figure 11: SolidWorks model of the system, showing the electronics enclosure, optical enclosure, and rubber damping mounts.

In light of these results, this design is no longer necessary. Not only are isolating mounts unnecessary, it is possible that they would amplify the vibration the chassis actually experiences. At present, the design calls for a flat flange that mounts directly to a flooring surface.
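The smear figures above can be checked with a short script. The ground footprint is 2·h·tan(θ/2); dividing by the sensor's pixel count gives the ground distance per pixel, and smear is the distance flown during the exposure divided by that value. The pixel count (1024) and shutter time (1/500 s) below are illustrative assumptions not stated in this document, so the absolute smear values will differ from the 0.66/0.13-pixel figures; the inverse scaling with altitude and the 1.27 m/s vibration speed do match the text.

```python
import math

def ground_sample_distance(altitude_m, image_angle_deg, pixels):
    """Ground distance covered by one pixel (m) for a full-angle lens view."""
    footprint = 2.0 * altitude_m * math.tan(math.radians(image_angle_deg) / 2.0)
    return footprint / pixels

def smear_pixels(speed_ms, shutter_s, altitude_m, image_angle_deg=38.1, pixels=1024):
    """Pixels of motion smear: distance moved during the exposure divided by
    the ground distance per pixel. pixels and shutter_s are assumed values."""
    gsd = ground_sample_distance(altitude_m, image_angle_deg, pixels)
    return speed_ms * shutter_s / gsd

def vibration_speed(freq_hz, amplitude_m):
    """Maximum vibratory speed F*A, following x'(t) = F*A*cos(F*t) above."""
    return freq_hz * amplitude_m

KNOT = 0.51444    # m/s per knot
FT = 0.3048       # m per foot

s1000 = smear_pixels(70 * KNOT, 1 / 500, 1000 * FT)
s5000 = smear_pixels(70 * KNOT, 1 / 500, 5000 * FT)
# Smear scales inversely with altitude, so the 1000 ft value is 5x the
# 5000 ft value, mirroring the 0.66 -> 0.13 pixel trend in the text.
# Vibration speed: 0.1 in amplitude at 500 Hz gives 1.27 m/s, as in the text.
v_vib = vibration_speed(500, 0.1 * 0.0254)
```

Because smear is linear in speed and in 1/altitude, any assumed pixel count and shutter time preserve the relative trends reported above.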
Chassis Design

Phase 1: Individual Compartments

Figure 12: First design phase, showing individual compartments.

Phase 2: Assure Component Scale

Figure 13: SolidWorks model showing dummy solids of the major electrical components.

Above: dummy solids of the major electrical components fit snugly in a 5.5" x 5.5" x 3" space. Below: the four customer-specified lenses fit well into an enclosure of similar cross-section.

Figure 14: SolidWorks model showing the camera lenses.

Phase 3: Detailed design to allow realistic thermal, vibrational, and spatial analysis

Figure 15: SolidWorks model showing the system design.
- Vibration dampers mount on the center flange
- Internal grooves allow component mounting to be modular, changeable, and secure
- A stock extruded enclosure reduces build time
- The "stacked" configuration maintains thermal separation at a minimal footprint

Phase 4: Final Mechanical Design
- Vibration damping omitted due to the recent analysis
- Custom machined to minimize size
- Separates optics from electronics, minimizing footprint
- Internals mounted on a sub-frame, rendering the primary enclosure adaptable to new configurations and different hardware

Environmental Management

Heat

Major sources of heat generation inside the chassis:
- Hard drive (roughly half of the heat produced)
- Voltage regulator
- FPGA
- DSP

Heat transfer models (all steady state):
- Radiation: model the electronics as a black body; from the electronics to the chassis, and from the chassis to the external environment
- Conduction: from the electronics into the chassis (heat travels through the ground planes on the boards and may be routed through the standoffs), then through the chassis material into the external environment
- Convection: negligible, as there is minimal (if any) moving air

Heat Transfer Analysis: A Radiation Model

(Diagram: board stack inside the chassis wall, labeling q_boards, q_chassis, T_boards, T_chassis, and T_ambient.)

Assumptions:
- Treat the enclosure as a black body radiating heat to the outside air
- Neglect convection (protected from moving air)
- Neglect conduction (connected to the airplane only by small vibration dampers)
- Temperature at the surface of the chassis equals the temperature inside the chassis
- All power consumed by the electronics is output as heat
- Emissivity ε = 0.89
- Heat radiating from the chassis is 50% of the heat radiating from the boards (q_chassis = 0.5·q_boards)

Analysis

Black-body radiation balance:

σ·ε·T_chassis^4 = σ·ε·T_ambient^4 + q_chassis/A_chassis
σ·ε·T_boards^4 = σ·ε·T_chassis^4 + q_boards/A_boards

Combined, this gives:

σ·ε·T_boards^4 = σ·ε·T_ambient^4 + q_chassis/A_chassis + q_boards/A_boards
               = σ·ε·T_ambient^4 + 0.5·q_boards/A_chassis + q_boards/A_boards
               = σ·ε·T_ambient^4 + q_boards·(1/(2·A_chassis) + 1/A_boards)

Rearranged and solved for T_boards:

T_boards = [ T_ambient^4 + (q_boards/(σ·ε))·(1/(2·A_chassis) + 1/A_boards) ]^(1/4)

Variables:
- T_chassis: temperature of the chassis (K)
- T_ambient: temperature of the environment outside the chassis (K); T_ambient = (T_ground(°C) − altitude(m)·6.5/1000) + 273
- q_chassis: heat radiating from the chassis (W)
- q_boards: heat radiating from the electronics inside the chassis (W)
- A_chassis: surface area of the chassis (m²)
- A_boards: surface area of the electronics (m²)
- σ: Stefan–Boltzmann constant, 5.67 × 10⁻⁸ W·m⁻²·K⁻⁴
- ε: emissivity of the chassis

From the spreadsheet solution of the above equations (A_boards = 0.0368 m², A_chassis = 0.1132 m²):

T_ambient (K) | P_gen (W) | T_boards (°C)
218 | 50 | 154.9
241.15 | 50 | 158.5
218 | 100 | 231.6
241.15 | 100 | 233.7
218 | 15 | 55.4
241.15 | 15 | 63.1

From this, if much more than 15 W of heat is generated by the electronics, the electronics will overheat. Radiation alone is, however, a worst-case scenario, so examining another model may prove worthwhile.
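As a cross-check, the radiation-only model reduces to a few lines; a sketch using the spreadsheet's surface areas, which reproduces the tabulated rows (e.g. 50 W at 218 K ambient gives about 155 °C):

```python
SIGMA = 5.67e-8          # Stefan-Boltzmann constant (W m^-2 K^-4)
EPS = 0.89               # chassis emissivity assumed above

A_BOARDS = 0.03677412    # board surface area (m^2), from the spreadsheet
A_CHASSIS = 0.113205262  # chassis surface area (m^2), from the spreadsheet

def board_temp_c(q_boards_w, t_ambient_k):
    """Board temperature (deg C) from the radiation-only model:
    T_boards = (T_amb^4 + q_b/(sigma*eps) * (1/(2*A_c) + 1/A_b))^(1/4)."""
    geometry = 1.0 / (2.0 * A_CHASSIS) + 1.0 / A_BOARDS
    t4 = t_ambient_k**4 + q_boards_w * geometry / (SIGMA * EPS)
    return t4**0.25 - 273.15

print(round(board_temp_c(50, 218.0), 1))    # ~154.9, matching the table
print(round(board_temp_c(15, 241.15), 1))   # ~63.1
```

The fourth-power dependence is why the board temperature roughly tracks the fourth root of dissipated power once the ambient term is small.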
Heat Transfer Analysis: A Conductive Model

(Diagram: chassis wall with E_in, E_gen, and E_out; inner surface at T_s1, outer surface at T_s2, ambient outside.)

Assumptions:
- T_s1 is the temperature on the inside surface of the chassis wall
- T_s2 is the temperature on the outside surface, taken as the outside ambient temperature
- E_gen is treated as if the energy were generated in the wall
- dx is the thickness of the wall

By conservation of energy, E_in + E_gen − E_out = E_stored, and at steady state E_stored = 0. Here E_gen is all the heat generated by the electronics, equal to the power they consume (I²R); E_in is the heat entering the chassis wall; and E_out is the heat leaving it. Setting the generated power equal to the heat conducted through the wall (Fourier's law) gives:

I²R = k·A·(T_s1 − T_s2)/dx

Solving for T_s1:

T_s1 = (I²R/(k·A))·dx + T_s2

Variables:
- I²R: power supplied to the electronics (W)
- T_s1: temperature on the inside of the chassis wall (K)
- T_s2: temperature on the outside of the chassis wall (K)
- k: thermal conductivity of the wall material (W·m⁻¹·K⁻¹)
- A: total surface area of the chassis (m²)
- dx: thickness of the chassis walls (m)

From the spreadsheet solution of the above equations (A = 0.1132 m², dx = 0.012 m, k = 30 W·m⁻¹·K⁻¹):

T_s2 (K) | P_gen (W) | T_s1 (K) | T_s1 (°C)
218 | 50 | 218.18 | −54.82
318 | 50 | 318.18 | 45.18
218 | 100 | 218.35 | −54.65
318 | 100 | 318.35 | 45.35
218 | 15 | 218.05 | −54.95
318 | 15 | 318.05 | 45.05

This model is not all-encompassing either; a more detailed combined conduction and radiation model may be valuable.
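The conductive model is one line of arithmetic; a sketch using the spreadsheet's wall parameters, which reproduces the tabulated values and shows that the temperature rise across the wall is under 0.2 K at 50 W:

```python
A_CHASSIS = 0.113205262  # chassis surface area (m^2), from the spreadsheet
DX = 0.012               # wall thickness (m)
K_WALL = 30.0            # wall conductivity (W/m/K) as used in the spreadsheet

def inner_wall_temp_k(p_gen_w, t_outside_k):
    """Inner-wall temperature from T_s1 = (I^2 R / (k A)) * dx + T_s2."""
    return p_gen_w * DX / (K_WALL * A_CHASSIS) + t_outside_k

print(round(inner_wall_temp_k(50, 218.0), 4))   # 218.1767: the wall adds <0.2 K
```

The tiny temperature rise confirms the conclusion above: conduction through the wall itself is not the bottleneck, so the inside of the wall essentially sits at the outside temperature.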
Heat Transfer Analysis: A Combined-Mode Approach

Modeling the system as a thermal circuit:

T_b − T_a = q·R_eq, where R_eq = Σ R_element

(Diagram: thermal circuit from the electronics at T_b, by radiation to the board mounting plate, by conduction through the standoffs and mounting plate, by conduction through the chassis wall, and by radiation to the external environment at T_a.)

The conductive resistances are:
- R_c1 = x1/(k1·A1), where x1 is the length, k1 the conductivity, and A1 the cross-sectional area of the standoffs
- R_c2 = x2/(k2·A2), for the depth, conductivity, and area of the mounting plate
- R_c3 = x3/(k3·A3), for the depth, conductivity, and area of the chassis wall

The radiation resistances are:
- R_r1 = 1/(h_r1·A_r1), where A_r1 is the surface area of the boards
- R_r2 = 1/(h_r2·A_r2), where A_r2 is the surface area of the chassis

with linearized radiation coefficients:
- h_r1 = ε·σ·(T_b + T_in)·(T_b² + T_in²)
- h_r2 = ε·σ·(T_b + T_a)·(T_b² + T_a²)

Assuming:
- T_in = 0.7·T_b
- T_wall = T_b
- ε = 0.89
- σ = 5.67 × 10⁻⁸ W·m⁻²·K⁻⁴

Note that h_r1 and h_r2 depend on T_b, the very value being solved for, so the radiation resistances cannot be evaluated directly; the model must be solved iteratively (guess T_b, evaluate R_eq, update T_b, and repeat).

The equivalent resistance comprises, in series: the four standoff conductive resistances in parallel; radiation between the electronics and the board mount; conduction between the board mount and the chassis wall; conduction through the chassis wall; and radiation between the wall and the external environment:

R_eq = (1/4)·R_c1 + R_r1 + R_c2 + R_c3 + R_r2

Using the model T_b = q·R_eq + T_a:

T_b = q·[ (x1/(k1·A1))/4 + R_r1 + x2/(k2·A2) + x3/(k3·A3) + R_r2 ] + T_a

Though this model has not yet been fully worked, its solution is expected to fall between those of the first two models.

Sources: Incropera et al., Fundamentals of Heat and Mass Transfer; Yunus A. Çengel, Heat Transfer: A Practical Approach; uwsp.edu (lapse rate in the troposphere); a published table of emissivity coefficients.

Environmental Considerations: Condensation

Dew Point Analysis

Dew point, the temperature at which water will condense on a surface, is a function of ambient temperature and relative humidity.
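The Magnus-type dew-point formula used in this analysis can be sketched as a small function; the constants are the range-dependent pairs (m, Tn), with m = 17.62 and Tn = 243.12 °C for 0 to 50 °C, and m = 22.46 and Tn = 272.62 °C for −40 to 0 °C. The spot checks reproduce the tabulated dew points.

```python
import math

# Magnus-type constants: (range low, range high, m dimensionless, Tn in deg C)
RANGES = (
    (-40.0, 0.0, 22.46, 272.62),
    (0.0, 50.0, 17.62, 243.12),
)

def dew_point_c(t_air_c, rh_percent):
    """Dew point (deg C) from air temperature (deg C) and relative humidity (%)."""
    for lo, hi, m, tn in RANGES:
        if lo <= t_air_c <= hi:
            break
    else:
        raise ValueError("temperature outside supported ranges")
    alpha = math.log(rh_percent / 100.0) + m * t_air_c / (tn + t_air_c)
    return tn * alpha / (m - alpha)

print(round(dew_point_c(-32.6096, 50), 2))  # -38.96, matching the table
print(round(dew_point_c(47.0218, 50), 2))   # 34.01
```

At 50% relative humidity the dew point sits only 10–15 °C below the air temperature, which is why condensation inside the chassis is a realistic concern across the flight envelope.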
Knowing the dew point will tell whether additional steps should be taken to control temperature and/or humidity inside the chassis. The dew point temperature is given by:

Td = Tn·[ln(RH/100%) + m·T/(Tn + T)] / [m − ln(RH/100%) − m·T/(Tn + T)]

with temperature-range-dependent constants:

Temp. range (°C) | Tn (°C) | m
0 to 50 | 243.12 | 17.62
−40 to 0 | 272.62 | 22.46

Variables:
- Td: dew point (°C)
- T: ambient temperature (°C)
- RH: relative humidity (%)
- m: temperature-range-dependent constant (dimensionless)
- Tn: temperature-range-dependent constant (°C)

From the spreadsheet solution of the above equation (and the information currently available), condensation may be a problem:

RH (%) | T_air (°C) | Dew point (°C)
50 | −32.61 | −38.96
50 | 47.02 | 34.01
1 | −33.98 | −70.29

There are two main options: adding a heater to keep the temperature inside the chassis above the dew point, or reducing the humidity inside the chassis to lower the dew point (a common method is a silica gel pack). Comparison of the methods:

Criterion (weight) | Heater system: rank (weighted) | Silica gel pack: rank (weighted)
Effective at reducing/preventing condensation (5) | 2 (10) | 2 (10)
Simplicity in manufacturing/implementation (3) | −1 (−3) | 1 (3)
Allows flexibility as heat requirements change (4) | 1 (4) | 2 (8)
Allows an air/water-tight enclosure (2) | 2 (4) | 2 (4)
Total | 21 | 31

From this comparison, a compact silica gel pack of appropriate size appears to be the best choice.

Source for dew point information: a sensor manufacturer's note explaining how to calculate dew point from sensor readings.

Mounting

Internal Mounting

Weighing the candidate heat-management and mounting techniques against each other (options: a central conductive mounting backbone; mounting each piece to the chassis separately; separate optics and electronics packages; a single package housing all components):

Criterion (weight) | Backbone | Separate pieces | Separate packages | Single package
Effective at removing heat (5) | 1 (5) | 1 (5) | 1 (5) | 1 (5)
Effective at retaining heat (5) | 1 (5) | 1 (5) | 1 (5) | 1 (5)
Simplicity in manufacturing (3) | 2 (6) | −2 (−6) | −1 (−3) | 1 (3)
Allows flexibility as heat requirements change (4) | 1 (4) | 1 (4) | 1 (4) | 0 (0)
Meets temperature needs of specific components (5) | 0 (0) | 0 (0) | 2 (10) | −1 (−5)
Allows an air/water-tight enclosure (2) | 1 (2) | 2 (4) | 1 (2) | 2 (4)
Total | 22 | 12 | 23 | 12

Electronics Mounting

Considerations:
- Relations of components to one another
- Board-to-board mounting
- Cable connections
- External connections/locations
- Customer's desire for reconfigurability of hardware components
- Need for easy access to the FPGA for reprogramming
- Need for easy access to the hard drive
- Manufacturability

Beginning concepts: components mounted on a frame of bars; a cantilever shelf mount.

Beginning refinement:
- Handle on top for ease of access
- Top and bottom plates fit into pre-cut grooves in the chassis walls
- Four stands connect the top and bottom plates
- Electronics connect to the top and bottom plates
(Figures: side view; top view, with the gray portion representing the top/bottom plates.)

Further refinement: the main board stack and SSD mount to a frame that slides into the enclosure via grooves cut into its inner surface.

Optics Mounting

Refined concepts:
- Hole in the center of the plate for cables
- Room for four cameras, plus additional space to mount other items (e.g., the IMU)
- Side supports fit into the inner grooves of the chassis

External Mounting

The design is still incomplete.

Appendix

Connector Board Schematic

Figure A1: Schematic for the Connector Board. The voltage regulator and monitor, represented by the lowermost symbol in the bottom right-hand corner, are discussed in greater detail in the Power section of this document.

CameraLink to D3 Chip Schematic

Figure A2: Customer-provided circuit to convert from the Camera Link format to the D3 Imager format used by the FPGA and OEM Boards.